Aug 13 00:00:54.087030 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 13 00:00:54.087049 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Tue Aug 12 22:50:30 -00 2025
Aug 13 00:00:54.087057 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Aug 13 00:00:54.087064 kernel: printk: bootconsole [pl11] enabled
Aug 13 00:00:54.087069 kernel: efi: EFI v2.70 by EDK II
Aug 13 00:00:54.087074 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8b98 RNG=0x3fd5f998 MEMRESERVE=0x3761cf98
Aug 13 00:00:54.087081 kernel: random: crng init done
Aug 13 00:00:54.087086 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:00:54.087091 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Aug 13 00:00:54.087097 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:00:54.087102 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:00:54.087107 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Aug 13 00:00:54.087114 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:00:54.087119 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:00:54.087126 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:00:54.087132 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:00:54.087138 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:00:54.087145 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:00:54.087150 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Aug 13 00:00:54.087156 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:00:54.087162 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Aug 13 00:00:54.087167 kernel: NUMA: Failed to initialise from firmware
Aug 13 00:00:54.087173 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Aug 13 00:00:54.087179 kernel: NUMA: NODE_DATA [mem 0x1bf7f3900-0x1bf7f8fff]
Aug 13 00:00:54.087184 kernel: Zone ranges:
Aug 13 00:00:54.087190 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Aug 13 00:00:54.087195 kernel: DMA32 empty
Aug 13 00:00:54.087201 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Aug 13 00:00:54.087208 kernel: Movable zone start for each node
Aug 13 00:00:54.087213 kernel: Early memory node ranges
Aug 13 00:00:54.087219 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Aug 13 00:00:54.087224 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Aug 13 00:00:54.087230 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Aug 13 00:00:54.087236 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Aug 13 00:00:54.087241 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Aug 13 00:00:54.087247 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Aug 13 00:00:54.087252 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Aug 13 00:00:54.087258 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Aug 13 00:00:54.087264 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Aug 13 00:00:54.087270 kernel: psci: probing for conduit method from ACPI.
Aug 13 00:00:54.087279 kernel: psci: PSCIv1.1 detected in firmware.
Aug 13 00:00:54.087285 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 13 00:00:54.087291 kernel: psci: MIGRATE_INFO_TYPE not supported.
Aug 13 00:00:54.087297 kernel: psci: SMC Calling Convention v1.4
Aug 13 00:00:54.087303 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Aug 13 00:00:54.087310 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Aug 13 00:00:54.087316 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Aug 13 00:00:54.087322 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Aug 13 00:00:54.087328 kernel: pcpu-alloc: [0] 0 [0] 1
Aug 13 00:00:54.087334 kernel: Detected PIPT I-cache on CPU0
Aug 13 00:00:54.087340 kernel: CPU features: detected: GIC system register CPU interface
Aug 13 00:00:54.087346 kernel: CPU features: detected: Hardware dirty bit management
Aug 13 00:00:54.087352 kernel: CPU features: detected: Spectre-BHB
Aug 13 00:00:54.087358 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 13 00:00:54.087364 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 13 00:00:54.087371 kernel: CPU features: detected: ARM erratum 1418040
Aug 13 00:00:54.087378 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Aug 13 00:00:54.087384 kernel: CPU features: detected: SSBS not fully self-synchronizing
Aug 13 00:00:54.087390 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Aug 13 00:00:54.087396 kernel: Policy zone: Normal
Aug 13 00:00:54.087403 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=32404c0887e5b8a80b0f069916a8040bfd969c7a8f47a2db1168b24bc04220cc
Aug 13 00:00:54.087410 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:00:54.087416 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:00:54.087422 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:00:54.087428 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:00:54.087434 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB)
Aug 13 00:00:54.087441 kernel: Memory: 3986880K/4194160K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 207280K reserved, 0K cma-reserved)
Aug 13 00:00:54.087448 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 00:00:54.087454 kernel: trace event string verifier disabled
Aug 13 00:00:54.087460 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 00:00:54.087467 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:00:54.087473 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 00:00:54.087479 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 00:00:54.087485 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:00:54.087492 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:00:54.087498 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 00:00:54.087504 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 13 00:00:54.087510 kernel: GICv3: 960 SPIs implemented
Aug 13 00:00:54.087517 kernel: GICv3: 0 Extended SPIs implemented
Aug 13 00:00:54.087523 kernel: GICv3: Distributor has no Range Selector support
Aug 13 00:00:54.087528 kernel: Root IRQ handler: gic_handle_irq
Aug 13 00:00:54.087534 kernel: GICv3: 16 PPIs implemented
Aug 13 00:00:54.087540 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Aug 13 00:00:54.087546 kernel: ITS: No ITS available, not enabling LPIs
Aug 13 00:00:54.087552 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:00:54.087559 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 13 00:00:54.087565 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 13 00:00:54.087571 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 13 00:00:54.087577 kernel: Console: colour dummy device 80x25
Aug 13 00:00:54.087585 kernel: printk: console [tty1] enabled
Aug 13 00:00:54.087591 kernel: ACPI: Core revision 20210730
Aug 13 00:00:54.087598 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 13 00:00:54.087604 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:00:54.087610 kernel: LSM: Security Framework initializing
Aug 13 00:00:54.087616 kernel: SELinux: Initializing.
Aug 13 00:00:54.087623 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:00:54.087629 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:00:54.087635 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Aug 13 00:00:54.087643 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Aug 13 00:00:54.087649 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:00:54.087655 kernel: Remapping and enabling EFI services.
Aug 13 00:00:54.087661 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:00:54.087667 kernel: Detected PIPT I-cache on CPU1
Aug 13 00:00:54.087674 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Aug 13 00:00:54.087689 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:00:54.087696 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 13 00:00:54.087702 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 00:00:54.087708 kernel: SMP: Total of 2 processors activated.
Aug 13 00:00:54.087716 kernel: CPU features: detected: 32-bit EL0 Support
Aug 13 00:00:54.087723 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Aug 13 00:00:54.087729 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 13 00:00:54.087735 kernel: CPU features: detected: CRC32 instructions
Aug 13 00:00:54.087741 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 13 00:00:54.087748 kernel: CPU features: detected: LSE atomic instructions
Aug 13 00:00:54.087754 kernel: CPU features: detected: Privileged Access Never
Aug 13 00:00:54.087761 kernel: CPU: All CPU(s) started at EL1
Aug 13 00:00:54.087767 kernel: alternatives: patching kernel code
Aug 13 00:00:54.087774 kernel: devtmpfs: initialized
Aug 13 00:00:54.087785 kernel: KASLR enabled
Aug 13 00:00:54.087792 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:00:54.087800 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 00:00:54.087806 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:00:54.087813 kernel: SMBIOS 3.1.0 present.
Aug 13 00:00:54.087819 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Aug 13 00:00:54.087826 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:00:54.087833 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 13 00:00:54.087840 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 13 00:00:54.087847 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 13 00:00:54.087853 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:00:54.087860 kernel: audit: type=2000 audit(0.098:1): state=initialized audit_enabled=0 res=1
Aug 13 00:00:54.087866 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:00:54.087873 kernel: cpuidle: using governor menu
Aug 13 00:00:54.087879 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 13 00:00:54.087887 kernel: ASID allocator initialised with 32768 entries
Aug 13 00:00:54.087893 kernel: ACPI: bus type PCI registered
Aug 13 00:00:54.087900 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:00:54.087907 kernel: Serial: AMBA PL011 UART driver
Aug 13 00:00:54.087913 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:00:54.087920 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Aug 13 00:00:54.087926 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:00:54.087933 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Aug 13 00:00:54.087939 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 00:00:54.087948 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 13 00:00:54.087954 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:00:54.087961 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:00:54.087968 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:00:54.087974 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Aug 13 00:00:54.087981 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Aug 13 00:00:54.087987 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Aug 13 00:00:54.087994 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:00:54.088000 kernel: ACPI: Interpreter enabled
Aug 13 00:00:54.088008 kernel: ACPI: Using GIC for interrupt routing
Aug 13 00:00:54.088015 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Aug 13 00:00:54.088021 kernel: printk: console [ttyAMA0] enabled
Aug 13 00:00:54.088028 kernel: printk: bootconsole [pl11] disabled
Aug 13 00:00:54.088034 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Aug 13 00:00:54.088041 kernel: iommu: Default domain type: Translated
Aug 13 00:00:54.088047 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 13 00:00:54.088054 kernel: vgaarb: loaded
Aug 13 00:00:54.088060 kernel: pps_core: LinuxPPS API ver. 1 registered
Aug 13 00:00:54.088067 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Aug 13 00:00:54.088075 kernel: PTP clock support registered
Aug 13 00:00:54.088081 kernel: Registered efivars operations
Aug 13 00:00:54.088088 kernel: No ACPI PMU IRQ for CPU0
Aug 13 00:00:54.088094 kernel: No ACPI PMU IRQ for CPU1
Aug 13 00:00:54.088101 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 13 00:00:54.088107 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:00:54.088114 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:00:54.088120 kernel: pnp: PnP ACPI init
Aug 13 00:00:54.088127 kernel: pnp: PnP ACPI: found 0 devices
Aug 13 00:00:54.088135 kernel: NET: Registered PF_INET protocol family
Aug 13 00:00:54.088141 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:00:54.088148 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 00:00:54.088154 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:00:54.088161 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:00:54.088168 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Aug 13 00:00:54.088174 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 00:00:54.088181 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:00:54.088188 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:00:54.088195 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:00:54.088201 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:00:54.088208 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Aug 13 00:00:54.088215 kernel: kvm [1]: HYP mode not available
Aug 13 00:00:54.088221 kernel: Initialise system trusted keyrings
Aug 13 00:00:54.088228 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 00:00:54.088234 kernel: Key type asymmetric registered
Aug 13 00:00:54.088241 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:00:54.088248 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Aug 13 00:00:54.088255 kernel: io scheduler mq-deadline registered
Aug 13 00:00:54.088261 kernel: io scheduler kyber registered
Aug 13 00:00:54.088267 kernel: io scheduler bfq registered
Aug 13 00:00:54.088274 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:00:54.088280 kernel: thunder_xcv, ver 1.0
Aug 13 00:00:54.088287 kernel: thunder_bgx, ver 1.0
Aug 13 00:00:54.088293 kernel: nicpf, ver 1.0
Aug 13 00:00:54.088300 kernel: nicvf, ver 1.0
Aug 13 00:00:54.088408 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 13 00:00:54.088469 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-13T00:00:53 UTC (1755043253)
Aug 13 00:00:54.088478 kernel: efifb: probing for efifb
Aug 13 00:00:54.088485 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Aug 13 00:00:54.088492 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Aug 13 00:00:54.088498 kernel: efifb: scrolling: redraw
Aug 13 00:00:54.088505 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Aug 13 00:00:54.088511 kernel: Console: switching to colour frame buffer device 128x48
Aug 13 00:00:54.088519 kernel: fb0: EFI VGA frame buffer device
Aug 13 00:00:54.088526 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Aug 13 00:00:54.088532 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 13 00:00:54.088539 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:00:54.088545 kernel: Segment Routing with IPv6
Aug 13 00:00:54.088552 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:00:54.088558 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:00:54.088565 kernel: Key type dns_resolver registered
Aug 13 00:00:54.088571 kernel: registered taskstats version 1
Aug 13 00:00:54.088577 kernel: Loading compiled-in X.509 certificates
Aug 13 00:00:54.088585 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 72b807ae6dac6ab18c2f4ab9460d3472cf28c19d'
Aug 13 00:00:54.088592 kernel: Key type .fscrypt registered
Aug 13 00:00:54.088598 kernel: Key type fscrypt-provisioning registered
Aug 13 00:00:54.088605 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:00:54.088611 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:00:54.088618 kernel: ima: No architecture policies found
Aug 13 00:00:54.088624 kernel: clk: Disabling unused clocks
Aug 13 00:00:54.088631 kernel: Freeing unused kernel memory: 36416K
Aug 13 00:00:54.088639 kernel: Run /init as init process
Aug 13 00:00:54.088645 kernel: with arguments:
Aug 13 00:00:54.088651 kernel: /init
Aug 13 00:00:54.088658 kernel: with environment:
Aug 13 00:00:54.088664 kernel: HOME=/
Aug 13 00:00:54.088670 kernel: TERM=linux
Aug 13 00:00:54.088677 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:00:54.088697 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Aug 13 00:00:54.088708 systemd[1]: Detected virtualization microsoft.
Aug 13 00:00:54.088715 systemd[1]: Detected architecture arm64.
Aug 13 00:00:54.088722 systemd[1]: Running in initrd.
Aug 13 00:00:54.088728 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:00:54.088735 systemd[1]: Hostname set to .
Aug 13 00:00:54.088743 systemd[1]: Initializing machine ID from random generator.
Aug 13 00:00:54.088749 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:00:54.088756 systemd[1]: Started systemd-ask-password-console.path.
Aug 13 00:00:54.088764 systemd[1]: Reached target cryptsetup.target.
Aug 13 00:00:54.088771 systemd[1]: Reached target paths.target.
Aug 13 00:00:54.088778 systemd[1]: Reached target slices.target.
Aug 13 00:00:54.088785 systemd[1]: Reached target swap.target.
Aug 13 00:00:54.088792 systemd[1]: Reached target timers.target.
Aug 13 00:00:54.088800 systemd[1]: Listening on iscsid.socket.
Aug 13 00:00:54.088807 systemd[1]: Listening on iscsiuio.socket.
Aug 13 00:00:54.088814 systemd[1]: Listening on systemd-journald-audit.socket.
Aug 13 00:00:54.088822 systemd[1]: Listening on systemd-journald-dev-log.socket.
Aug 13 00:00:54.088829 systemd[1]: Listening on systemd-journald.socket.
Aug 13 00:00:54.088836 systemd[1]: Listening on systemd-networkd.socket.
Aug 13 00:00:54.088843 systemd[1]: Listening on systemd-udevd-control.socket.
Aug 13 00:00:54.088850 systemd[1]: Listening on systemd-udevd-kernel.socket.
Aug 13 00:00:54.088858 systemd[1]: Reached target sockets.target.
Aug 13 00:00:54.088865 systemd[1]: Starting kmod-static-nodes.service...
Aug 13 00:00:54.088872 systemd[1]: Finished network-cleanup.service.
Aug 13 00:00:54.088879 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:00:54.088887 systemd[1]: Starting systemd-journald.service...
Aug 13 00:00:54.088894 systemd[1]: Starting systemd-modules-load.service...
Aug 13 00:00:54.088901 systemd[1]: Starting systemd-resolved.service...
Aug 13 00:00:54.088911 systemd-journald[276]: Journal started
Aug 13 00:00:54.088948 systemd-journald[276]: Runtime Journal (/run/log/journal/44746116d2924a9b80c713dcdc130d56) is 8.0M, max 78.5M, 70.5M free.
Aug 13 00:00:54.077645 systemd-modules-load[277]: Inserted module 'overlay'
Aug 13 00:00:54.123704 systemd[1]: Starting systemd-vconsole-setup.service...
Aug 13 00:00:54.123733 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:00:54.130656 kernel: Bridge firewalling registered
Aug 13 00:00:54.130807 systemd-modules-load[277]: Inserted module 'br_netfilter'
Aug 13 00:00:54.161629 systemd[1]: Started systemd-journald.service.
Aug 13 00:00:54.161650 kernel: SCSI subsystem initialized
Aug 13 00:00:54.150662 systemd-resolved[278]: Positive Trust Anchors:
Aug 13 00:00:54.150670 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:00:54.150731 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Aug 13 00:00:54.265775 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 00:00:54.265800 kernel: audit: type=1130 audit(1755043254.179:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:54.265816 kernel: device-mapper: uevent: version 1.0.3
Aug 13 00:00:54.265825 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Aug 13 00:00:54.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:54.152916 systemd-resolved[278]: Defaulting to hostname 'linux'.
Aug 13 00:00:54.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:54.180536 systemd[1]: Started systemd-resolved.service.
Aug 13 00:00:54.263764 systemd-modules-load[277]: Inserted module 'dm_multipath'
Aug 13 00:00:54.292075 systemd[1]: Finished kmod-static-nodes.service.
Aug 13 00:00:54.332313 kernel: audit: type=1130 audit(1755043254.268:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:54.332334 kernel: audit: type=1130 audit(1755043254.300:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:54.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:54.300994 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:00:54.357420 kernel: audit: type=1130 audit(1755043254.327:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:54.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:54.328836 systemd[1]: Finished systemd-modules-load.service.
Aug 13 00:00:54.382913 kernel: audit: type=1130 audit(1755043254.352:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:54.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:54.355995 systemd[1]: Finished systemd-vconsole-setup.service.
Aug 13 00:00:54.383901 systemd[1]: Reached target nss-lookup.target.
Aug 13 00:00:54.412892 kernel: audit: type=1130 audit(1755043254.382:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:54.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:54.413496 systemd[1]: Starting dracut-cmdline-ask.service...
Aug 13 00:00:54.418936 systemd[1]: Starting systemd-sysctl.service...
Aug 13 00:00:54.435253 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Aug 13 00:00:54.450159 systemd[1]: Finished dracut-cmdline-ask.service.
Aug 13 00:00:54.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:54.461288 systemd[1]: Finished systemd-sysctl.service.
Aug 13 00:00:54.488059 kernel: audit: type=1130 audit(1755043254.460:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:54.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:54.494945 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Aug 13 00:00:54.524527 kernel: audit: type=1130 audit(1755043254.493:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:54.519633 systemd[1]: Starting dracut-cmdline.service...
Aug 13 00:00:54.555017 kernel: audit: type=1130 audit(1755043254.518:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:54.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:54.560258 dracut-cmdline[299]: dracut-dracut-053
Aug 13 00:00:54.566467 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=32404c0887e5b8a80b0f069916a8040bfd969c7a8f47a2db1168b24bc04220cc
Aug 13 00:00:54.658708 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 00:00:54.674697 kernel: iscsi: registered transport (tcp)
Aug 13 00:00:54.696406 kernel: iscsi: registered transport (qla4xxx)
Aug 13 00:00:54.696459 kernel: QLogic iSCSI HBA Driver
Aug 13 00:00:54.726022 systemd[1]: Finished dracut-cmdline.service.
Aug 13 00:00:54.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:54.731818 systemd[1]: Starting dracut-pre-udev.service...
Aug 13 00:00:54.785704 kernel: raid6: neonx8 gen() 13722 MB/s
Aug 13 00:00:54.806691 kernel: raid6: neonx8 xor() 10837 MB/s
Aug 13 00:00:54.827697 kernel: raid6: neonx4 gen() 13549 MB/s
Aug 13 00:00:54.865690 kernel: raid6: neonx4 xor() 11156 MB/s
Aug 13 00:00:54.874700 kernel: raid6: neonx2 gen() 12963 MB/s
Aug 13 00:00:54.891691 kernel: raid6: neonx2 xor() 10275 MB/s
Aug 13 00:00:54.913692 kernel: raid6: neonx1 gen() 10665 MB/s
Aug 13 00:00:54.934691 kernel: raid6: neonx1 xor() 8792 MB/s
Aug 13 00:00:54.955690 kernel: raid6: int64x8 gen() 6272 MB/s
Aug 13 00:00:54.977691 kernel: raid6: int64x8 xor() 3544 MB/s
Aug 13 00:00:54.998691 kernel: raid6: int64x4 gen() 7208 MB/s
Aug 13 00:00:55.019691 kernel: raid6: int64x4 xor() 3859 MB/s
Aug 13 00:00:55.041691 kernel: raid6: int64x2 gen() 6152 MB/s
Aug 13 00:00:55.061690 kernel: raid6: int64x2 xor() 3320 MB/s
Aug 13 00:00:55.081691 kernel: raid6: int64x1 gen() 5046 MB/s
Aug 13 00:00:55.108716 kernel: raid6: int64x1 xor() 2646 MB/s
Aug 13 00:00:55.108731 kernel: raid6: using algorithm neonx8 gen() 13722 MB/s
Aug 13 00:00:55.108740 kernel: raid6: .... xor() 10837 MB/s, rmw enabled
Aug 13 00:00:55.113363 kernel: raid6: using neon recovery algorithm
Aug 13 00:00:55.135350 kernel: xor: measuring software checksum speed
Aug 13 00:00:55.135363 kernel: 8regs : 17246 MB/sec
Aug 13 00:00:55.139638 kernel: 32regs : 20697 MB/sec
Aug 13 00:00:55.149564 kernel: arm64_neon : 26113 MB/sec
Aug 13 00:00:55.149576 kernel: xor: using function: arm64_neon (26113 MB/sec)
Aug 13 00:00:55.206700 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Aug 13 00:00:55.216139 systemd[1]: Finished dracut-pre-udev.service.
Aug 13 00:00:55.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:55.225000 audit: BPF prog-id=7 op=LOAD
Aug 13 00:00:55.225000 audit: BPF prog-id=8 op=LOAD
Aug 13 00:00:55.226365 systemd[1]: Starting systemd-udevd.service...
Aug 13 00:00:55.245625 systemd-udevd[475]: Using default interface naming scheme 'v252'.
Aug 13 00:00:55.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:55.252448 systemd[1]: Started systemd-udevd.service.
Aug 13 00:00:55.264420 systemd[1]: Starting dracut-pre-trigger.service...
Aug 13 00:00:55.280069 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation
Aug 13 00:00:55.316061 systemd[1]: Finished dracut-pre-trigger.service.
Aug 13 00:00:55.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:00:55.322334 systemd[1]: Starting systemd-udev-trigger.service...
Aug 13 00:00:55.358796 systemd[1]: Finished systemd-udev-trigger.service.
Aug 13 00:00:55.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:55.413051 kernel: hv_vmbus: Vmbus version:5.3 Aug 13 00:00:55.423708 kernel: hv_vmbus: registering driver hid_hyperv Aug 13 00:00:55.442341 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Aug 13 00:00:55.442389 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Aug 13 00:00:55.451713 kernel: hv_vmbus: registering driver hv_storvsc Aug 13 00:00:55.451758 kernel: hv_vmbus: registering driver hv_netvsc Aug 13 00:00:55.465918 kernel: hv_vmbus: registering driver hyperv_keyboard Aug 13 00:00:55.465987 kernel: scsi host1: storvsc_host_t Aug 13 00:00:55.470512 kernel: scsi host0: storvsc_host_t Aug 13 00:00:55.471702 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Aug 13 00:00:55.492128 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Aug 13 00:00:55.500698 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Aug 13 00:00:55.521450 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Aug 13 00:00:55.534068 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 13 00:00:55.534092 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Aug 13 00:00:55.561517 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Aug 13 00:00:55.561630 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Aug 13 00:00:55.561737 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 13 00:00:55.561815 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Aug 13 00:00:55.561891 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Aug 13 00:00:55.561967 kernel: sda: sda1 sda2 sda3 
sda4 sda6 sda7 sda9 Aug 13 00:00:55.561984 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 13 00:00:55.617866 kernel: hv_netvsc 000d3a6e-6005-000d-3a6e-6005000d3a6e eth0: VF slot 1 added Aug 13 00:00:55.626709 kernel: hv_vmbus: registering driver hv_pci Aug 13 00:00:55.635702 kernel: hv_pci a6933294-aaa5-445e-9b6f-f16dd0a0e176: PCI VMBus probing: Using version 0x10004 Aug 13 00:00:55.724736 kernel: hv_pci a6933294-aaa5-445e-9b6f-f16dd0a0e176: PCI host bridge to bus aaa5:00 Aug 13 00:00:55.724829 kernel: pci_bus aaa5:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Aug 13 00:00:55.724927 kernel: pci_bus aaa5:00: No busn resource found for root bus, will use [bus 00-ff] Aug 13 00:00:55.724996 kernel: pci aaa5:00:02.0: [15b3:1018] type 00 class 0x020000 Aug 13 00:00:55.725086 kernel: pci aaa5:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Aug 13 00:00:55.725173 kernel: pci aaa5:00:02.0: enabling Extended Tags Aug 13 00:00:55.725255 kernel: pci aaa5:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at aaa5:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Aug 13 00:00:55.725332 kernel: pci_bus aaa5:00: busn_res: [bus 00-ff] end is updated to 00 Aug 13 00:00:55.725403 kernel: pci aaa5:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Aug 13 00:00:55.762072 kernel: mlx5_core aaa5:00:02.0: enabling device (0000 -> 0002) Aug 13 00:00:56.010580 kernel: mlx5_core aaa5:00:02.0: firmware version: 16.30.1284 Aug 13 00:00:56.010724 kernel: mlx5_core aaa5:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Aug 13 00:00:56.010817 kernel: hv_netvsc 000d3a6e-6005-000d-3a6e-6005000d3a6e eth0: VF registering: eth1 Aug 13 00:00:56.010903 kernel: mlx5_core aaa5:00:02.0 eth1: joined to eth0 Aug 13 00:00:56.010979 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (534) Aug 13 00:00:55.988608 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. 
Aug 13 00:00:56.025696 kernel: mlx5_core aaa5:00:02.0 enP43685s1: renamed from eth1 Aug 13 00:00:56.028123 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 00:00:56.203112 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Aug 13 00:00:56.218975 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Aug 13 00:00:56.225449 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Aug 13 00:00:56.241146 systemd[1]: Starting disk-uuid.service... Aug 13 00:00:56.270185 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:00:57.287706 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:00:57.288032 disk-uuid[601]: The operation has completed successfully. Aug 13 00:00:57.360965 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:00:57.361864 systemd[1]: Finished disk-uuid.service. Aug 13 00:00:57.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:57.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:57.379004 systemd[1]: Starting verity-setup.service... Aug 13 00:00:57.417720 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Aug 13 00:00:57.708626 systemd[1]: Found device dev-mapper-usr.device. Aug 13 00:00:57.715041 systemd[1]: Mounting sysusr-usr.mount... Aug 13 00:00:57.724170 systemd[1]: Finished verity-setup.service. Aug 13 00:00:57.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:57.788717 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. 
Quota mode: none. Aug 13 00:00:57.789220 systemd[1]: Mounted sysusr-usr.mount. Aug 13 00:00:57.793480 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Aug 13 00:00:57.794246 systemd[1]: Starting ignition-setup.service... Aug 13 00:00:57.801791 systemd[1]: Starting parse-ip-for-networkd.service... Aug 13 00:00:57.840900 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:00:57.840957 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:00:57.840967 kernel: BTRFS info (device sda6): has skinny extents Aug 13 00:00:57.881609 systemd[1]: Finished parse-ip-for-networkd.service. Aug 13 00:00:57.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:57.890000 audit: BPF prog-id=9 op=LOAD Aug 13 00:00:57.891744 systemd[1]: Starting systemd-networkd.service... Aug 13 00:00:57.913733 systemd-networkd[865]: lo: Link UP Aug 13 00:00:57.913747 systemd-networkd[865]: lo: Gained carrier Aug 13 00:00:57.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:57.914190 systemd-networkd[865]: Enumeration completed Aug 13 00:00:57.917725 systemd[1]: Started systemd-networkd.service. Aug 13 00:00:57.917964 systemd-networkd[865]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:00:57.923486 systemd[1]: Reached target network.target. Aug 13 00:00:57.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:57.932944 systemd[1]: Starting iscsiuio.service... 
Aug 13 00:00:57.945257 systemd[1]: Started iscsiuio.service. Aug 13 00:00:57.957227 systemd[1]: Starting iscsid.service... Aug 13 00:00:57.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:57.984129 iscsid[870]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:00:57.984129 iscsid[870]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Aug 13 00:00:57.984129 iscsid[870]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Aug 13 00:00:57.984129 iscsid[870]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Aug 13 00:00:57.984129 iscsid[870]: If using hardware iscsi like qla4xxx this message can be ignored. Aug 13 00:00:57.984129 iscsid[870]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:00:57.984129 iscsid[870]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Aug 13 00:00:58.123740 kernel: kauditd_printk_skb: 14 callbacks suppressed Aug 13 00:00:58.123768 kernel: audit: type=1130 audit(1755043257.975:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:58.123779 kernel: audit: type=1130 audit(1755043258.039:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Aug 13 00:00:58.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:57.968812 systemd[1]: Started iscsid.service. Aug 13 00:00:58.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:57.985908 systemd[1]: Starting dracut-initqueue.service... Aug 13 00:00:58.154813 kernel: audit: type=1130 audit(1755043258.128:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:58.015408 systemd[1]: Finished dracut-initqueue.service. Aug 13 00:00:58.070272 systemd[1]: Reached target remote-fs-pre.target. Aug 13 00:00:58.077594 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 00:00:58.085769 systemd[1]: Reached target remote-fs.target. Aug 13 00:00:58.177630 kernel: mlx5_core aaa5:00:02.0 enP43685s1: Link up Aug 13 00:00:58.100606 systemd[1]: Starting dracut-pre-mount.service... Aug 13 00:00:58.115262 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 00:00:58.123661 systemd[1]: Finished dracut-pre-mount.service. Aug 13 00:00:58.198759 systemd[1]: Finished ignition-setup.service. Aug 13 00:00:58.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:00:58.228706 kernel: audit: type=1130 audit(1755043258.203:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:00:58.228741 kernel: hv_netvsc 000d3a6e-6005-000d-3a6e-6005000d3a6e eth0: Data path switched to VF: enP43685s1 Aug 13 00:00:58.233853 systemd[1]: Starting ignition-fetch-offline.service... Aug 13 00:00:58.254351 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 00:00:58.236309 systemd-networkd[865]: enP43685s1: Link UP Aug 13 00:00:58.236390 systemd-networkd[865]: eth0: Link UP Aug 13 00:00:58.248765 systemd-networkd[865]: eth0: Gained carrier Aug 13 00:00:58.262011 systemd-networkd[865]: enP43685s1: Gained carrier Aug 13 00:00:58.275761 systemd-networkd[865]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16 Aug 13 00:00:59.615902 systemd-networkd[865]: eth0: Gained IPv6LL Aug 13 00:01:00.877482 ignition[893]: Ignition 2.14.0 Aug 13 00:01:00.877495 ignition[893]: Stage: fetch-offline Aug 13 00:01:00.877549 ignition[893]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:01:00.877573 ignition[893]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:01:00.946415 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:01:00.946550 ignition[893]: parsed url from cmdline: "" Aug 13 00:01:00.946554 ignition[893]: no config URL provided Aug 13 00:01:00.946559 ignition[893]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:01:00.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:00.961895 systemd[1]: Finished ignition-fetch-offline.service. 
Aug 13 00:01:00.946567 ignition[893]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:01:01.014043 kernel: audit: type=1130 audit(1755043260.971:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:00.999463 systemd[1]: Starting ignition-fetch.service... Aug 13 00:01:00.946572 ignition[893]: failed to fetch config: resource requires networking Aug 13 00:01:00.946971 ignition[893]: Ignition finished successfully Aug 13 00:01:01.011940 ignition[899]: Ignition 2.14.0 Aug 13 00:01:01.011947 ignition[899]: Stage: fetch Aug 13 00:01:01.012058 ignition[899]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:01:01.012077 ignition[899]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:01:01.015120 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:01:01.015505 ignition[899]: parsed url from cmdline: "" Aug 13 00:01:01.015513 ignition[899]: no config URL provided Aug 13 00:01:01.015520 ignition[899]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:01:01.015532 ignition[899]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:01:01.015570 ignition[899]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Aug 13 00:01:01.115806 ignition[899]: GET result: OK Aug 13 00:01:01.115881 ignition[899]: config has been read from IMDS userdata Aug 13 00:01:01.118909 unknown[899]: fetched base config from "system" Aug 13 00:01:01.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:01:01.115921 ignition[899]: parsing config with SHA512: 53de9e07ad58a489a3dcd9b2d9bcc89bdc058e8e82ea36764f64696dad6e865920fac7ced1685546732f6643fbe2a39655afef3c1941722acdb49b4eb2ff62aa Aug 13 00:01:01.118921 unknown[899]: fetched base config from "system" Aug 13 00:01:01.175346 kernel: audit: type=1130 audit(1755043261.129:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:01.119478 ignition[899]: fetch: fetch complete Aug 13 00:01:01.118926 unknown[899]: fetched user config from "azure" Aug 13 00:01:01.119483 ignition[899]: fetch: fetch passed Aug 13 00:01:01.216913 kernel: audit: type=1130 audit(1755043261.189:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:01.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:01.124912 systemd[1]: Finished ignition-fetch.service. Aug 13 00:01:01.119519 ignition[899]: Ignition finished successfully Aug 13 00:01:01.152553 systemd[1]: Starting ignition-kargs.service... Aug 13 00:01:01.169564 ignition[905]: Ignition 2.14.0 Aug 13 00:01:01.184669 systemd[1]: Finished ignition-kargs.service. Aug 13 00:01:01.169570 ignition[905]: Stage: kargs Aug 13 00:01:01.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:01.190483 systemd[1]: Starting ignition-disks.service... 
Aug 13 00:01:01.272518 kernel: audit: type=1130 audit(1755043261.241:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:01.169671 ignition[905]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:01:01.234035 systemd[1]: Finished ignition-disks.service. Aug 13 00:01:01.169702 ignition[905]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:01:01.260909 systemd[1]: Reached target initrd-root-device.target. Aug 13 00:01:01.177004 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:01:01.268424 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:01:01.178268 ignition[905]: kargs: kargs passed Aug 13 00:01:01.277744 systemd[1]: Reached target local-fs.target. Aug 13 00:01:01.178329 ignition[905]: Ignition finished successfully Aug 13 00:01:01.289302 systemd[1]: Reached target sysinit.target. Aug 13 00:01:01.203195 ignition[911]: Ignition 2.14.0 Aug 13 00:01:01.298693 systemd[1]: Reached target basic.target. Aug 13 00:01:01.203201 ignition[911]: Stage: disks Aug 13 00:01:01.309050 systemd[1]: Starting systemd-fsck-root.service... 
Aug 13 00:01:01.203312 ignition[911]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:01:01.203332 ignition[911]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:01:01.206061 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:01:01.232302 ignition[911]: disks: disks passed Aug 13 00:01:01.232370 ignition[911]: Ignition finished successfully Aug 13 00:01:01.379915 systemd-fsck[919]: ROOT: clean, 629/7326000 files, 481082/7359488 blocks Aug 13 00:01:01.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:01.389642 systemd[1]: Finished systemd-fsck-root.service. Aug 13 00:01:01.419299 kernel: audit: type=1130 audit(1755043261.395:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:01.414690 systemd[1]: Mounting sysroot.mount... Aug 13 00:01:01.455707 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Aug 13 00:01:01.456288 systemd[1]: Mounted sysroot.mount. Aug 13 00:01:01.460332 systemd[1]: Reached target initrd-root-fs.target. Aug 13 00:01:01.495352 systemd[1]: Mounting sysroot-usr.mount... Aug 13 00:01:01.500203 systemd[1]: Starting flatcar-metadata-hostname.service... Aug 13 00:01:01.507546 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:01:01.507601 systemd[1]: Reached target ignition-diskful.target. Aug 13 00:01:01.518986 systemd[1]: Mounted sysroot-usr.mount. Aug 13 00:01:01.592611 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Aug 13 00:01:01.598510 systemd[1]: Starting initrd-setup-root.service... Aug 13 00:01:01.629708 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (930) Aug 13 00:01:01.638312 initrd-setup-root[935]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:01:01.650243 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:01:01.650265 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:01:01.656402 kernel: BTRFS info (device sda6): has skinny extents Aug 13 00:01:01.663836 systemd[1]: Mounted sysroot-usr-share-oem.mount. Aug 13 00:01:01.683638 initrd-setup-root[961]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:01:01.706583 initrd-setup-root[969]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:01:01.729224 initrd-setup-root[977]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:01:02.283976 systemd[1]: Finished initrd-setup-root.service. Aug 13 00:01:02.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:02.289966 systemd[1]: Starting ignition-mount.service... Aug 13 00:01:02.321721 kernel: audit: type=1130 audit(1755043262.288:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:02.321969 systemd[1]: Starting sysroot-boot.service... Aug 13 00:01:02.328402 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Aug 13 00:01:02.328495 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Aug 13 00:01:02.355530 ignition[997]: INFO : Ignition 2.14.0 Aug 13 00:01:02.360492 ignition[997]: INFO : Stage: mount Aug 13 00:01:02.360492 ignition[997]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:01:02.360492 ignition[997]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:01:02.360492 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:01:02.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:02.403369 ignition[997]: INFO : mount: mount passed Aug 13 00:01:02.403369 ignition[997]: INFO : Ignition finished successfully Aug 13 00:01:02.373805 systemd[1]: Finished ignition-mount.service. Aug 13 00:01:02.423854 systemd[1]: Finished sysroot-boot.service. Aug 13 00:01:02.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:02.799901 coreos-metadata[929]: Aug 13 00:01:02.799 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Aug 13 00:01:02.810436 coreos-metadata[929]: Aug 13 00:01:02.810 INFO Fetch successful Aug 13 00:01:02.837108 coreos-metadata[929]: Aug 13 00:01:02.837 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Aug 13 00:01:02.850833 coreos-metadata[929]: Aug 13 00:01:02.850 INFO Fetch successful Aug 13 00:01:02.864467 coreos-metadata[929]: Aug 13 00:01:02.864 INFO wrote hostname ci-3510.3.8-a-af9fafecff to /sysroot/etc/hostname Aug 13 00:01:02.875227 systemd[1]: Finished flatcar-metadata-hostname.service. 
Aug 13 00:01:02.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:02.882361 systemd[1]: Starting ignition-files.service... Aug 13 00:01:02.899488 systemd[1]: Mounting sysroot-usr-share-oem.mount... Aug 13 00:01:02.930717 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1009) Aug 13 00:01:02.944946 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:01:02.944985 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:01:02.950286 kernel: BTRFS info (device sda6): has skinny extents Aug 13 00:01:02.957983 systemd[1]: Mounted sysroot-usr-share-oem.mount. Aug 13 00:01:02.972592 ignition[1028]: INFO : Ignition 2.14.0 Aug 13 00:01:02.972592 ignition[1028]: INFO : Stage: files Aug 13 00:01:02.984010 ignition[1028]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:01:02.984010 ignition[1028]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:01:02.984010 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:01:03.016102 ignition[1028]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:01:03.016102 ignition[1028]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:01:03.016102 ignition[1028]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:01:03.070555 ignition[1028]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:01:03.080338 ignition[1028]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:01:03.096022 unknown[1028]: wrote ssh authorized keys file for 
user: core Aug 13 00:01:03.102017 ignition[1028]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:01:03.109941 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 00:01:03.109941 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 00:01:03.109941 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 13 00:01:03.109941 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Aug 13 00:01:03.298768 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 00:01:03.595033 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 13 00:01:03.616315 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:01:03.627719 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Aug 13 00:01:03.805231 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Aug 13 00:01:03.881555 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:01:03.892768 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:01:03.903452 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 00:01:03.903452 
ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:01:03.903452 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:01:03.903452 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:01:03.903452 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:01:03.903452 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:01:03.903452 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:01:03.986994 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:01:03.986994 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:01:03.986994 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 00:01:03.986994 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 00:01:03.986994 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Aug 13 00:01:03.986994 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(c): oem config not found in "/usr/share/oem", looking on oem partition Aug 13 
00:01:03.986994 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2789588515" Aug 13 00:01:03.986994 ignition[1028]: CRITICAL : files: createFilesystemsFiles: createFiles: op(c): op(d): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2789588515": device or resource busy Aug 13 00:01:03.986994 ignition[1028]: ERROR : files: createFilesystemsFiles: createFiles: op(c): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2789588515", trying btrfs: device or resource busy Aug 13 00:01:03.986994 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2789588515" Aug 13 00:01:03.986994 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2789588515" Aug 13 00:01:03.986994 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [started] unmounting "/mnt/oem2789588515" Aug 13 00:01:03.986994 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [finished] unmounting "/mnt/oem2789588515" Aug 13 00:01:03.986994 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Aug 13 00:01:03.953111 systemd[1]: mnt-oem2789588515.mount: Deactivated successfully. 
Aug 13 00:01:04.177299 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Aug 13 00:01:04.177299 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition Aug 13 00:01:04.177299 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1077444466" Aug 13 00:01:04.177299 ignition[1028]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1077444466": device or resource busy Aug 13 00:01:04.177299 ignition[1028]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1077444466", trying btrfs: device or resource busy Aug 13 00:01:04.177299 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1077444466" Aug 13 00:01:04.177299 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1077444466" Aug 13 00:01:04.177299 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem1077444466" Aug 13 00:01:04.177299 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem1077444466" Aug 13 00:01:04.177299 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Aug 13 00:01:04.177299 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 00:01:04.177299 ignition[1028]: INFO : files: createFilesystemsFiles: 
createFiles: op(14): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Aug 13 00:01:04.349724 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(14): GET result: OK Aug 13 00:01:04.622069 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 00:01:04.622069 ignition[1028]: INFO : files: op(15): [started] processing unit "waagent.service" Aug 13 00:01:04.622069 ignition[1028]: INFO : files: op(15): [finished] processing unit "waagent.service" Aug 13 00:01:04.622069 ignition[1028]: INFO : files: op(16): [started] processing unit "nvidia.service" Aug 13 00:01:04.622069 ignition[1028]: INFO : files: op(16): [finished] processing unit "nvidia.service" Aug 13 00:01:04.622069 ignition[1028]: INFO : files: op(17): [started] processing unit "containerd.service" Aug 13 00:01:04.718474 kernel: kauditd_printk_skb: 3 callbacks suppressed Aug 13 00:01:04.718497 kernel: audit: type=1130 audit(1755043264.649:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:04.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:01:04.718579 ignition[1028]: INFO : files: op(17): op(18): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 00:01:04.718579 ignition[1028]: INFO : files: op(17): op(18): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 00:01:04.718579 ignition[1028]: INFO : files: op(17): [finished] processing unit "containerd.service" Aug 13 00:01:04.718579 ignition[1028]: INFO : files: op(19): [started] processing unit "prepare-helm.service" Aug 13 00:01:04.718579 ignition[1028]: INFO : files: op(19): op(1a): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:01:04.718579 ignition[1028]: INFO : files: op(19): op(1a): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:01:04.718579 ignition[1028]: INFO : files: op(19): [finished] processing unit "prepare-helm.service" Aug 13 00:01:04.718579 ignition[1028]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-helm.service" Aug 13 00:01:04.718579 ignition[1028]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:01:04.718579 ignition[1028]: INFO : files: op(1c): [started] setting preset to enabled for "waagent.service" Aug 13 00:01:04.718579 ignition[1028]: INFO : files: op(1c): [finished] setting preset to enabled for "waagent.service" Aug 13 00:01:04.718579 ignition[1028]: INFO : files: op(1d): [started] setting preset to enabled for "nvidia.service" Aug 13 00:01:04.718579 ignition[1028]: INFO : files: op(1d): [finished] setting preset to enabled for "nvidia.service" Aug 13 00:01:04.718579 ignition[1028]: INFO : files: createResultFile: createFiles: op(1e): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:01:04.718579 
ignition[1028]: INFO : files: createResultFile: createFiles: op(1e): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:01:04.718579 ignition[1028]: INFO : files: files passed Aug 13 00:01:04.718579 ignition[1028]: INFO : Ignition finished successfully Aug 13 00:01:05.025826 kernel: audit: type=1130 audit(1755043264.724:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.025862 kernel: audit: type=1130 audit(1755043264.758:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.025872 kernel: audit: type=1131 audit(1755043264.758:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.025882 kernel: audit: type=1130 audit(1755043264.862:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.025891 kernel: audit: type=1131 audit(1755043264.886:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.025901 kernel: audit: type=1130 audit(1755043264.984:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:04.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:01:04.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:04.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:04.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:04.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:04.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:04.637640 systemd[1]: Finished ignition-files.service. Aug 13 00:01:04.652423 systemd[1]: Starting initrd-setup-root-after-ignition.service... Aug 13 00:01:05.040690 initrd-setup-root-after-ignition[1052]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:01:04.689375 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Aug 13 00:01:04.697478 systemd[1]: Starting ignition-quench.service... Aug 13 00:01:04.711304 systemd[1]: Finished initrd-setup-root-after-ignition.service. Aug 13 00:01:05.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:01:04.725270 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:01:04.725365 systemd[1]: Finished ignition-quench.service. Aug 13 00:01:05.125558 kernel: audit: type=1131 audit(1755043265.088:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:04.758798 systemd[1]: Reached target ignition-complete.target. Aug 13 00:01:04.820959 systemd[1]: Starting initrd-parse-etc.service... Aug 13 00:01:04.858127 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:01:04.858228 systemd[1]: Finished initrd-parse-etc.service. Aug 13 00:01:04.887529 systemd[1]: Reached target initrd-fs.target. Aug 13 00:01:04.916841 systemd[1]: Reached target initrd.target. Aug 13 00:01:04.929440 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Aug 13 00:01:04.937105 systemd[1]: Starting dracut-pre-pivot.service... Aug 13 00:01:04.980598 systemd[1]: Finished dracut-pre-pivot.service. Aug 13 00:01:05.019651 systemd[1]: Starting initrd-cleanup.service... Aug 13 00:01:05.038388 systemd[1]: Stopped target nss-lookup.target. Aug 13 00:01:05.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.045401 systemd[1]: Stopped target remote-cryptsetup.target. Aug 13 00:01:05.060612 systemd[1]: Stopped target timers.target. Aug 13 00:01:05.260288 kernel: audit: type=1131 audit(1755043265.224:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.078408 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Aug 13 00:01:05.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.078483 systemd[1]: Stopped dracut-pre-pivot.service. Aug 13 00:01:05.111375 systemd[1]: Stopped target initrd.target. Aug 13 00:01:05.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.121405 systemd[1]: Stopped target basic.target. Aug 13 00:01:05.311006 kernel: audit: type=1131 audit(1755043265.264:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.129601 systemd[1]: Stopped target ignition-complete.target. Aug 13 00:01:05.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.138886 systemd[1]: Stopped target ignition-diskful.target. Aug 13 00:01:05.147815 systemd[1]: Stopped target initrd-root-device.target. Aug 13 00:01:05.158208 systemd[1]: Stopped target remote-fs.target. Aug 13 00:01:05.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:01:05.356605 ignition[1066]: INFO : Ignition 2.14.0 Aug 13 00:01:05.356605 ignition[1066]: INFO : Stage: umount Aug 13 00:01:05.356605 ignition[1066]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:01:05.356605 ignition[1066]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Aug 13 00:01:05.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.167105 systemd[1]: Stopped target remote-fs-pre.target. 
Aug 13 00:01:05.424046 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:01:05.424046 ignition[1066]: INFO : umount: umount passed Aug 13 00:01:05.424046 ignition[1066]: INFO : Ignition finished successfully Aug 13 00:01:05.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.176074 systemd[1]: Stopped target sysinit.target. Aug 13 00:01:05.184593 systemd[1]: Stopped target local-fs.target. Aug 13 00:01:05.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.196537 systemd[1]: Stopped target local-fs-pre.target. Aug 13 00:01:05.206079 systemd[1]: Stopped target swap.target. Aug 13 00:01:05.215080 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:01:05.215147 systemd[1]: Stopped dracut-pre-mount.service. Aug 13 00:01:05.247359 systemd[1]: Stopped target cryptsetup.target. Aug 13 00:01:05.256431 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:01:05.256494 systemd[1]: Stopped dracut-initqueue.service. Aug 13 00:01:05.287930 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Aug 13 00:01:05.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.287992 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Aug 13 00:01:05.298110 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:01:05.298155 systemd[1]: Stopped ignition-files.service. Aug 13 00:01:05.307186 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 13 00:01:05.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.307235 systemd[1]: Stopped flatcar-metadata-hostname.service. Aug 13 00:01:05.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.321820 systemd[1]: Stopping ignition-mount.service... Aug 13 00:01:05.334240 systemd[1]: Stopping iscsiuio.service... Aug 13 00:01:05.605000 audit: BPF prog-id=6 op=UNLOAD Aug 13 00:01:05.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.346994 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:01:05.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.347077 systemd[1]: Stopped kmod-static-nodes.service. 
Aug 13 00:01:05.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.353925 systemd[1]: Stopping sysroot-boot.service... Aug 13 00:01:05.363182 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:01:05.363256 systemd[1]: Stopped systemd-udev-trigger.service. Aug 13 00:01:05.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.368492 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:01:05.368535 systemd[1]: Stopped dracut-pre-trigger.service. Aug 13 00:01:05.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.374084 systemd[1]: iscsiuio.service: Deactivated successfully. Aug 13 00:01:05.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.374214 systemd[1]: Stopped iscsiuio.service. Aug 13 00:01:05.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.381933 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:01:05.382035 systemd[1]: Finished initrd-cleanup.service. Aug 13 00:01:05.391950 systemd[1]: ignition-mount.service: Deactivated successfully. 
Aug 13 00:01:05.734196 kernel: hv_netvsc 000d3a6e-6005-000d-3a6e-6005000d3a6e eth0: Data path switched from VF: enP43685s1 Aug 13 00:01:05.392048 systemd[1]: Stopped ignition-mount.service. Aug 13 00:01:05.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.410774 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:01:05.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.411229 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:01:05.411270 systemd[1]: Stopped ignition-disks.service. Aug 13 00:01:05.428498 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:01:05.428554 systemd[1]: Stopped ignition-kargs.service. Aug 13 00:01:05.442238 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 00:01:05.442301 systemd[1]: Stopped ignition-fetch.service. Aug 13 00:01:05.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.452553 systemd[1]: Stopped target network.target. Aug 13 00:01:05.461657 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:01:05.461731 systemd[1]: Stopped ignition-fetch-offline.service. Aug 13 00:01:05.471081 systemd[1]: Stopped target paths.target. 
Aug 13 00:01:05.479234 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:01:05.487484 systemd[1]: Stopped systemd-ask-password-console.path. Aug 13 00:01:05.492902 systemd[1]: Stopped target slices.target. Aug 13 00:01:05.503069 systemd[1]: Stopped target sockets.target. Aug 13 00:01:05.511854 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:01:05.511884 systemd[1]: Closed iscsid.socket. Aug 13 00:01:05.520984 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:01:05.521018 systemd[1]: Closed iscsiuio.socket. Aug 13 00:01:05.530654 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:01:05.530709 systemd[1]: Stopped ignition-setup.service. Aug 13 00:01:05.539226 systemd[1]: Stopping systemd-networkd.service... Aug 13 00:01:05.549535 systemd[1]: Stopping systemd-resolved.service... Aug 13 00:01:05.558725 systemd-networkd[865]: eth0: DHCPv6 lease lost Aug 13 00:01:05.866000 audit: BPF prog-id=9 op=UNLOAD Aug 13 00:01:05.564253 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:01:05.564370 systemd[1]: Stopped systemd-networkd.service. Aug 13 00:01:05.575422 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:01:05.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.575511 systemd[1]: Stopped systemd-resolved.service. Aug 13 00:01:05.587706 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:01:05.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:05.587754 systemd[1]: Closed systemd-networkd.socket. Aug 13 00:01:05.598081 systemd[1]: Stopping network-cleanup.service... 
Aug 13 00:01:05.606476 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:01:05.606547 systemd[1]: Stopped parse-ip-for-networkd.service. Aug 13 00:01:05.612268 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:01:05.612311 systemd[1]: Stopped systemd-sysctl.service. Aug 13 00:01:05.628028 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:01:05.628092 systemd[1]: Stopped systemd-modules-load.service. Aug 13 00:01:05.633663 systemd[1]: Stopping systemd-udevd.service... Aug 13 00:01:05.643911 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 00:01:05.644451 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:01:05.644594 systemd[1]: Stopped systemd-udevd.service. Aug 13 00:01:05.658090 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:01:05.993000 audit: BPF prog-id=8 op=UNLOAD Aug 13 00:01:05.993000 audit: BPF prog-id=7 op=UNLOAD Aug 13 00:01:05.993000 audit: BPF prog-id=5 op=UNLOAD Aug 13 00:01:05.993000 audit: BPF prog-id=4 op=UNLOAD Aug 13 00:01:05.993000 audit: BPF prog-id=3 op=UNLOAD Aug 13 00:01:05.658147 systemd[1]: Closed systemd-udevd-control.socket. Aug 13 00:01:05.666898 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:01:05.666937 systemd[1]: Closed systemd-udevd-kernel.socket. Aug 13 00:01:05.671996 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:01:05.672068 systemd[1]: Stopped dracut-pre-udev.service. Aug 13 00:01:05.682181 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:01:05.682227 systemd[1]: Stopped dracut-cmdline.service. Aug 13 00:01:06.043057 systemd-journald[276]: Received SIGTERM from PID 1 (n/a). Aug 13 00:01:06.043095 iscsid[870]: iscsid shutting down. Aug 13 00:01:05.691244 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Aug 13 00:01:05.691303 systemd[1]: Stopped dracut-cmdline-ask.service. Aug 13 00:01:05.701358 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Aug 13 00:01:05.726704 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:01:05.726798 systemd[1]: Stopped systemd-vconsole-setup.service. Aug 13 00:01:05.739957 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:01:05.740068 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Aug 13 00:01:05.773206 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:01:05.773417 systemd[1]: Stopped network-cleanup.service. Aug 13 00:01:05.880204 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:01:05.880329 systemd[1]: Stopped sysroot-boot.service. Aug 13 00:01:05.889161 systemd[1]: Reached target initrd-switch-root.target. Aug 13 00:01:05.901364 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:01:05.901437 systemd[1]: Stopped initrd-setup-root.service. Aug 13 00:01:05.913565 systemd[1]: Starting initrd-switch-root.service... Aug 13 00:01:05.993383 systemd[1]: Switching root. Aug 13 00:01:06.043567 systemd-journald[276]: Journal stopped Aug 13 00:01:36.245019 kernel: SELinux: Class mctp_socket not defined in policy. Aug 13 00:01:36.245041 kernel: SELinux: Class anon_inode not defined in policy. 
Aug 13 00:01:36.245052 kernel: SELinux: the above unknown classes and permissions will be allowed Aug 13 00:01:36.245062 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:01:36.245070 kernel: SELinux: policy capability open_perms=1 Aug 13 00:01:36.245078 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:01:36.245087 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:01:36.245096 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:01:36.245105 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:01:36.245112 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:01:36.245120 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:01:36.245129 kernel: kauditd_printk_skb: 38 callbacks suppressed Aug 13 00:01:36.245138 kernel: audit: type=1403 audit(1755043271.606:86): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:01:36.245148 systemd[1]: Successfully loaded SELinux policy in 269.286ms. Aug 13 00:01:36.245159 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.872ms. Aug 13 00:01:36.245170 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 00:01:36.245180 systemd[1]: Detected virtualization microsoft. Aug 13 00:01:36.245188 systemd[1]: Detected architecture arm64. Aug 13 00:01:36.245197 systemd[1]: Detected first boot. Aug 13 00:01:36.245207 systemd[1]: Hostname set to . Aug 13 00:01:36.245217 systemd[1]: Initializing machine ID from random generator. Aug 13 00:01:36.245226 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Aug 13 00:01:36.245238 kernel: audit: type=1400 audit(1755043273.728:87): avc: denied { associate } for pid=1117 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Aug 13 00:01:36.245248 kernel: audit: type=1300 audit(1755043273.728:87): arch=c00000b7 syscall=5 success=yes exit=0 a0=40000220ec a1=4000028060 a2=4000026040 a3=32 items=0 ppid=1100 pid=1117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:01:36.245257 kernel: audit: type=1327 audit(1755043273.728:87): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 13 00:01:36.245267 kernel: audit: type=1400 audit(1755043273.743:88): avc: denied { associate } for pid=1117 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Aug 13 00:01:36.245276 kernel: audit: type=1300 audit(1755043273.743:88): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000147689 a2=1ed a3=0 items=2 ppid=1100 pid=1117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:01:36.245286 kernel: audit: type=1307 audit(1755043273.743:88): cwd="/" Aug 13 00:01:36.245296 kernel: audit: type=1302 audit(1755043273.743:88): item=0 name=(null) inode=2 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:36.245305 kernel: audit: type=1302 audit(1755043273.743:88): item=1 name=(null) inode=3 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:36.245314 kernel: audit: type=1327 audit(1755043273.743:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 13 00:01:36.245323 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:01:36.245333 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:01:36.245342 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:01:36.245354 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:01:36.245363 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:01:36.245372 systemd[1]: Unnecessary job was removed for dev-sda6.device. Aug 13 00:01:36.245381 systemd[1]: Created slice system-addon\x2dconfig.slice. Aug 13 00:01:36.245391 systemd[1]: Created slice system-addon\x2drun.slice. Aug 13 00:01:36.245400 systemd[1]: Created slice system-getty.slice. Aug 13 00:01:36.245411 systemd[1]: Created slice system-modprobe.slice. Aug 13 00:01:36.245422 systemd[1]: Created slice system-serial\x2dgetty.slice. Aug 13 00:01:36.245432 systemd[1]: Created slice system-system\x2dcloudinit.slice. 
Aug 13 00:01:36.245441 systemd[1]: Created slice system-systemd\x2dfsck.slice. Aug 13 00:01:36.245451 systemd[1]: Created slice user.slice. Aug 13 00:01:36.245460 systemd[1]: Started systemd-ask-password-console.path. Aug 13 00:01:36.245469 systemd[1]: Started systemd-ask-password-wall.path. Aug 13 00:01:36.245478 systemd[1]: Set up automount boot.automount. Aug 13 00:01:36.245488 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Aug 13 00:01:36.245498 systemd[1]: Reached target integritysetup.target. Aug 13 00:01:36.245509 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 00:01:36.245518 systemd[1]: Reached target remote-fs.target. Aug 13 00:01:36.245527 systemd[1]: Reached target slices.target. Aug 13 00:01:36.245537 systemd[1]: Reached target swap.target. Aug 13 00:01:36.245546 systemd[1]: Reached target torcx.target. Aug 13 00:01:36.245555 systemd[1]: Reached target veritysetup.target. Aug 13 00:01:36.245564 systemd[1]: Listening on systemd-coredump.socket. Aug 13 00:01:36.245574 systemd[1]: Listening on systemd-initctl.socket. Aug 13 00:01:36.245585 kernel: audit: type=1400 audit(1755043295.776:89): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 00:01:36.245594 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 00:01:36.245604 kernel: audit: type=1335 audit(1755043295.776:90): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Aug 13 00:01:36.245613 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 00:01:36.245623 systemd[1]: Listening on systemd-journald.socket. Aug 13 00:01:36.245632 systemd[1]: Listening on systemd-networkd.socket. Aug 13 00:01:36.245642 systemd[1]: Listening on systemd-udevd-control.socket. 
Aug 13 00:01:36.245653 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 00:01:36.245663 systemd[1]: Listening on systemd-userdbd.socket. Aug 13 00:01:36.245672 systemd[1]: Mounting dev-hugepages.mount... Aug 13 00:01:36.245723 systemd[1]: Mounting dev-mqueue.mount... Aug 13 00:01:36.245734 systemd[1]: Mounting media.mount... Aug 13 00:01:36.245744 systemd[1]: Mounting sys-kernel-debug.mount... Aug 13 00:01:36.245755 systemd[1]: Mounting sys-kernel-tracing.mount... Aug 13 00:01:36.245764 systemd[1]: Mounting tmp.mount... Aug 13 00:01:36.245774 systemd[1]: Starting flatcar-tmpfiles.service... Aug 13 00:01:36.245783 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:01:36.245793 systemd[1]: Starting kmod-static-nodes.service... Aug 13 00:01:36.245802 systemd[1]: Starting modprobe@configfs.service... Aug 13 00:01:36.245812 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:01:36.245822 systemd[1]: Starting modprobe@drm.service... Aug 13 00:01:36.245831 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:01:36.245842 systemd[1]: Starting modprobe@fuse.service... Aug 13 00:01:36.245852 systemd[1]: Starting modprobe@loop.service... Aug 13 00:01:36.245863 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:01:36.245873 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Aug 13 00:01:36.245882 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Aug 13 00:01:36.245892 systemd[1]: Starting systemd-journald.service... Aug 13 00:01:36.245901 kernel: loop: module loaded Aug 13 00:01:36.245910 systemd[1]: Starting systemd-modules-load.service... Aug 13 00:01:36.245919 systemd[1]: Starting systemd-network-generator.service... 
Aug 13 00:01:36.245930 kernel: fuse: init (API version 7.34) Aug 13 00:01:36.245939 systemd[1]: Starting systemd-remount-fs.service... Aug 13 00:01:36.245949 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 00:01:36.245958 systemd[1]: Mounted dev-hugepages.mount. Aug 13 00:01:36.245968 systemd[1]: Mounted dev-mqueue.mount. Aug 13 00:01:36.245977 systemd[1]: Mounted media.mount. Aug 13 00:01:36.245986 systemd[1]: Mounted sys-kernel-debug.mount. Aug 13 00:01:36.245996 kernel: audit: type=1305 audit(1755043296.242:91): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 13 00:01:36.246010 systemd-journald[1224]: Journal started Aug 13 00:01:36.246057 systemd-journald[1224]: Runtime Journal (/run/log/journal/893d7bbc56ff486999716a20f8e4648c) is 8.0M, max 78.5M, 70.5M free. Aug 13 00:01:35.776000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Aug 13 00:01:36.242000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 13 00:01:36.263192 systemd[1]: Started systemd-journald.service. 
Aug 13 00:01:36.263253 kernel: audit: type=1300 audit(1755043296.242:91): arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffec274270 a2=4000 a3=1 items=0 ppid=1 pid=1224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:01:36.242000 audit[1224]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffec274270 a2=4000 a3=1 items=0 ppid=1 pid=1224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:01:36.242000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Aug 13 00:01:36.313043 kernel: audit: type=1327 audit(1755043296.242:91): proctitle="/usr/lib/systemd/systemd-journald" Aug 13 00:01:36.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.316783 systemd[1]: Mounted sys-kernel-tracing.mount. Aug 13 00:01:36.341814 kernel: audit: type=1130 audit(1755043296.315:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.342315 systemd[1]: Mounted tmp.mount. Aug 13 00:01:36.346739 systemd[1]: Finished flatcar-tmpfiles.service. Aug 13 00:01:36.352997 systemd[1]: Finished kmod-static-nodes.service. Aug 13 00:01:36.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:01:36.380961 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:01:36.381238 systemd[1]: Finished modprobe@configfs.service. Aug 13 00:01:36.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.405847 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:01:36.406095 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:01:36.406349 kernel: audit: type=1130 audit(1755043296.351:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.406388 kernel: audit: type=1130 audit(1755043296.355:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.406401 kernel: audit: type=1130 audit(1755043296.383:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.406413 kernel: audit: type=1131 audit(1755043296.383:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:01:36.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.467658 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:01:36.467935 systemd[1]: Finished modprobe@drm.service. Aug 13 00:01:36.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.474261 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:01:36.474491 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:01:36.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:01:36.481441 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:01:36.481657 systemd[1]: Finished modprobe@fuse.service. Aug 13 00:01:36.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.487568 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:01:36.487879 systemd[1]: Finished modprobe@loop.service. Aug 13 00:01:36.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.494256 systemd[1]: Finished systemd-network-generator.service. Aug 13 00:01:36.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.500998 systemd[1]: Finished systemd-remount-fs.service. Aug 13 00:01:36.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.507348 systemd[1]: Reached target network-pre.target.
Aug 13 00:01:36.514605 systemd[1]: Mounting sys-fs-fuse-connections.mount... Aug 13 00:01:36.521269 systemd[1]: Mounting sys-kernel-config.mount... Aug 13 00:01:36.529266 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:01:36.697044 systemd[1]: Starting systemd-hwdb-update.service... Aug 13 00:01:36.702639 systemd[1]: Starting systemd-journal-flush.service... Aug 13 00:01:36.707222 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:01:36.708453 systemd[1]: Starting systemd-random-seed.service... Aug 13 00:01:36.713099 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:01:36.714396 systemd[1]: Starting systemd-sysusers.service... Aug 13 00:01:36.721989 systemd[1]: Finished systemd-modules-load.service. Aug 13 00:01:36.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.727988 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 00:01:36.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.733929 systemd[1]: Mounted sys-fs-fuse-connections.mount. Aug 13 00:01:36.738870 systemd[1]: Mounted sys-kernel-config.mount. Aug 13 00:01:36.744983 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:01:36.750754 systemd[1]: Starting systemd-udev-settle.service... Aug 13 00:01:36.763913 udevadm[1265]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
Aug 13 00:01:36.797244 systemd-journald[1224]: Time spent on flushing to /var/log/journal/893d7bbc56ff486999716a20f8e4648c is 13.844ms for 1035 entries. Aug 13 00:01:36.797244 systemd-journald[1224]: System Journal (/var/log/journal/893d7bbc56ff486999716a20f8e4648c) is 8.0M, max 2.6G, 2.6G free. Aug 13 00:01:36.885932 systemd-journald[1224]: Received client request to flush runtime journal. Aug 13 00:01:36.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.805973 systemd[1]: Finished systemd-random-seed.service. Aug 13 00:01:36.811292 systemd[1]: Reached target first-boot-complete.target. Aug 13 00:01:36.887073 systemd[1]: Finished systemd-journal-flush.service. Aug 13 00:01:36.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:36.949764 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:01:36.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:37.592582 systemd[1]: Finished systemd-sysusers.service. Aug 13 00:01:37.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:37.598893 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 00:01:38.245317 systemd[1]: Finished systemd-hwdb-update.service. 
Aug 13 00:01:38.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:38.397710 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 00:01:38.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:38.404878 systemd[1]: Starting systemd-udevd.service... Aug 13 00:01:38.423461 systemd-udevd[1276]: Using default interface naming scheme 'v252'. Aug 13 00:01:39.451954 systemd[1]: Started systemd-udevd.service. Aug 13 00:01:39.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:39.463234 systemd[1]: Starting systemd-networkd.service... Aug 13 00:01:39.496061 systemd[1]: Found device dev-ttyAMA0.device. Aug 13 00:01:39.540875 systemd[1]: Starting systemd-userdbd.service... 
Aug 13 00:01:39.581000 audit[1279]: AVC avc: denied { confidentiality } for pid=1279 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Aug 13 00:01:39.589700 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:01:39.589802 kernel: hv_vmbus: registering driver hv_balloon Aug 13 00:01:39.590710 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Aug 13 00:01:39.602187 kernel: hv_balloon: Memory hot add disabled on ARM64 Aug 13 00:01:39.581000 audit[1279]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaaf0a36c90 a1=aa2c a2=ffff9d6724b0 a3=aaaaf0997010 items=12 ppid=1276 pid=1279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:01:39.581000 audit: CWD cwd="/" Aug 13 00:01:39.581000 audit: PATH item=0 name=(null) inode=7247 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:39.581000 audit: PATH item=1 name=(null) inode=11583 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:39.581000 audit: PATH item=2 name=(null) inode=11583 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:39.581000 audit: PATH item=3 name=(null) inode=11584 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:39.581000 audit: PATH item=4 name=(null) inode=11583 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Aug 13 00:01:39.581000 audit: PATH item=5 name=(null) inode=11585 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:39.581000 audit: PATH item=6 name=(null) inode=11583 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:39.581000 audit: PATH item=7 name=(null) inode=11586 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:39.581000 audit: PATH item=8 name=(null) inode=11583 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:39.581000 audit: PATH item=9 name=(null) inode=11587 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:39.581000 audit: PATH item=10 name=(null) inode=11583 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:39.581000 audit: PATH item=11 name=(null) inode=11588 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:01:39.581000 audit: PROCTITLE proctitle="(udev-worker)" Aug 13 00:01:39.623611 kernel: hv_vmbus: registering driver hyperv_fb Aug 13 00:01:39.647131 kernel: hv_utils: Registering HyperV Utility Driver Aug 13 00:01:39.647191 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Aug 13 00:01:39.647232 kernel: hv_vmbus: registering driver hv_utils Aug 13 00:01:39.647259 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Aug 13 00:01:39.647277 kernel: hv_utils: Heartbeat IC version 3.0 Aug 13 00:01:39.629027 systemd[1]: Started systemd-userdbd.service. Aug 13 00:01:39.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:39.655792 kernel: Console: switching to colour dummy device 80x25 Aug 13 00:01:39.655890 kernel: hv_utils: Shutdown IC version 3.2 Aug 13 00:01:39.662949 kernel: hv_utils: TimeSync IC version 4.0 Aug 13 00:01:39.295804 kernel: Console: switching to colour frame buffer device 128x48 Aug 13 00:01:39.365712 systemd-journald[1224]: Time jumped backwards, rotating. Aug 13 00:01:39.564846 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 00:01:39.576909 systemd[1]: Finished systemd-udev-settle.service. Aug 13 00:01:39.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:39.583784 systemd[1]: Starting lvm2-activation-early.service... Aug 13 00:01:39.929432 lvm[1355]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:01:39.990100 systemd[1]: Finished lvm2-activation-early.service. Aug 13 00:01:39.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:39.997525 systemd[1]: Reached target cryptsetup.target. Aug 13 00:01:40.003797 systemd[1]: Starting lvm2-activation.service... Aug 13 00:01:40.008774 lvm[1357]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 00:01:40.023360 systemd-networkd[1297]: lo: Link UP Aug 13 00:01:40.023371 systemd-networkd[1297]: lo: Gained carrier Aug 13 00:01:40.023776 systemd-networkd[1297]: Enumeration completed Aug 13 00:01:40.023946 systemd[1]: Started systemd-networkd.service. Aug 13 00:01:40.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:40.030481 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 13 00:01:40.036617 systemd[1]: Finished lvm2-activation.service. Aug 13 00:01:40.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:40.041714 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:01:40.047905 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:01:40.048038 systemd[1]: Reached target local-fs.target. Aug 13 00:01:40.052991 systemd[1]: Reached target machines.target. Aug 13 00:01:40.059001 systemd-networkd[1297]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:01:40.059974 systemd[1]: Starting ldconfig.service... Aug 13 00:01:40.101364 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:01:40.101751 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:01:40.103097 systemd[1]: Starting systemd-boot-update.service... 
Aug 13 00:01:40.113900 kernel: mlx5_core aaa5:00:02.0 enP43685s1: Link up Aug 13 00:01:40.115032 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Aug 13 00:01:40.122794 systemd[1]: Starting systemd-machine-id-commit.service... Aug 13 00:01:40.129029 systemd[1]: Starting systemd-sysext.service... Aug 13 00:01:40.142910 kernel: hv_netvsc 000d3a6e-6005-000d-3a6e-6005000d3a6e eth0: Data path switched to VF: enP43685s1 Aug 13 00:01:40.144569 systemd-networkd[1297]: enP43685s1: Link UP Aug 13 00:01:40.145027 systemd-networkd[1297]: eth0: Link UP Aug 13 00:01:40.145089 systemd-networkd[1297]: eth0: Gained carrier Aug 13 00:01:40.150488 systemd-networkd[1297]: enP43685s1: Gained carrier Aug 13 00:01:40.154022 systemd-networkd[1297]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16 Aug 13 00:01:40.180478 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1361 (bootctl) Aug 13 00:01:40.181816 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Aug 13 00:01:40.224670 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:01:40.225357 systemd[1]: Finished systemd-machine-id-commit.service. Aug 13 00:01:40.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:40.236384 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Aug 13 00:01:40.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:40.245432 systemd[1]: Unmounting usr-share-oem.mount... Aug 13 00:01:40.254108 systemd[1]: usr-share-oem.mount: Deactivated successfully. 
Aug 13 00:01:40.254400 systemd[1]: Unmounted usr-share-oem.mount. Aug 13 00:01:40.318903 kernel: loop0: detected capacity change from 0 to 203944 Aug 13 00:01:40.389145 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:01:40.414897 kernel: loop1: detected capacity change from 0 to 203944 Aug 13 00:01:40.432275 (sd-sysext)[1377]: Using extensions 'kubernetes'. Aug 13 00:01:40.432648 (sd-sysext)[1377]: Merged extensions into '/usr'. Aug 13 00:01:40.457894 systemd[1]: Mounting usr-share-oem.mount... Aug 13 00:01:40.462500 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:01:40.464152 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:01:40.471212 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:01:40.480658 systemd[1]: Starting modprobe@loop.service... Aug 13 00:01:40.484723 systemd-fsck[1370]: fsck.fat 4.2 (2021-01-31) Aug 13 00:01:40.484723 systemd-fsck[1370]: /dev/sda1: 236 files, 117307/258078 clusters Aug 13 00:01:40.490968 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:01:40.491132 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:01:40.496179 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Aug 13 00:01:40.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:40.507855 systemd[1]: Mounted usr-share-oem.mount. 
Aug 13 00:01:40.512205 kernel: kauditd_printk_skb: 44 callbacks suppressed Aug 13 00:01:40.512267 kernel: audit: type=1130 audit(1755043300.506:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:40.537665 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:01:40.538004 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:01:40.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:40.543619 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:01:40.543907 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:01:40.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:40.585818 kernel: audit: type=1130 audit(1755043300.542:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:40.585948 kernel: audit: type=1131 audit(1755043300.542:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:40.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:01:40.586832 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:01:40.587147 systemd[1]: Finished modprobe@loop.service. Aug 13 00:01:40.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:40.628292 kernel: audit: type=1130 audit(1755043300.585:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:40.628344 kernel: audit: type=1131 audit(1755043300.585:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:40.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:40.630663 systemd[1]: Finished systemd-sysext.service. Aug 13 00:01:40.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:40.670928 kernel: audit: type=1130 audit(1755043300.628:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:40.671052 kernel: audit: type=1131 audit(1755043300.628:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 13 00:01:40.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:40.679711 systemd[1]: Mounting boot.mount... Aug 13 00:01:40.698883 kernel: audit: type=1130 audit(1755043300.674:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:40.700944 systemd[1]: Starting ensure-sysext.service... Aug 13 00:01:40.706288 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:01:40.706514 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:01:40.708119 systemd[1]: Starting systemd-tmpfiles-setup.service... Aug 13 00:01:40.718407 systemd[1]: Reloading. Aug 13 00:01:40.733306 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Aug 13 00:01:40.756946 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2025-08-13T00:01:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:01:40.756974 /usr/lib/systemd/system-generators/torcx-generator[1420]: time="2025-08-13T00:01:40Z" level=info msg="torcx already run" Aug 13 00:01:40.777266 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:01:40.791338 systemd-tmpfiles[1397]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 00:01:40.846969 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:01:40.846990 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:01:40.862814 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:01:40.943905 systemd[1]: Mounted boot.mount. Aug 13 00:01:40.958651 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:01:40.961454 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:01:40.968010 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:01:40.977319 systemd[1]: Starting modprobe@loop.service... Aug 13 00:01:40.981559 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:01:40.981701 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:01:40.982725 systemd[1]: Finished systemd-boot-update.service. Aug 13 00:01:40.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:40.988697 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:01:40.988866 systemd[1]: Finished modprobe@dm_mod.service. 
Aug 13 00:01:41.014069 kernel: audit: type=1130 audit(1755043300.987:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:41.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:41.015132 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:01:41.015442 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:01:41.038578 kernel: audit: type=1130 audit(1755043301.013:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:41.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:41.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:41.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:41.039931 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:01:41.040274 systemd[1]: Finished modprobe@loop.service. 
Aug 13 00:01:41.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:41.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:41.047148 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:01:41.048661 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:01:41.056545 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:01:41.063326 systemd[1]: Starting modprobe@loop.service... Aug 13 00:01:41.068632 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:01:41.069001 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:01:41.069972 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:01:41.070249 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:01:41.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:41.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:41.075842 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:01:41.076165 systemd[1]: Finished modprobe@efi_pstore.service. 
Aug 13 00:01:41.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:41.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:41.082661 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:01:41.082979 systemd[1]: Finished modprobe@loop.service. Aug 13 00:01:41.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:41.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:41.091171 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:01:41.092951 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:01:41.099075 systemd[1]: Starting modprobe@drm.service... Aug 13 00:01:41.106529 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:01:41.113224 systemd[1]: Starting modprobe@loop.service... Aug 13 00:01:41.118481 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:01:41.118734 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:01:41.119839 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Aug 13 00:01:41.120166 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:01:41.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:41.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:41.125452 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:01:41.125717 systemd[1]: Finished modprobe@drm.service. Aug 13 00:01:41.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:41.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:41.130827 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:01:41.131206 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:01:41.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:41.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:41.136825 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:01:41.137229 systemd[1]: Finished modprobe@loop.service. 
Aug 13 00:01:41.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:41.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:41.143110 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:01:41.143278 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:01:41.144538 systemd[1]: Finished ensure-sysext.service. Aug 13 00:01:41.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:41.291031 systemd-networkd[1297]: eth0: Gained IPv6LL Aug 13 00:01:41.295944 systemd[1]: Finished systemd-networkd-wait-online.service. Aug 13 00:01:41.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:43.382895 systemd[1]: Finished systemd-tmpfiles-setup.service. Aug 13 00:01:43.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:43.390616 systemd[1]: Starting audit-rules.service... Aug 13 00:01:43.396728 systemd[1]: Starting clean-ca-certificates.service... Aug 13 00:01:43.403591 systemd[1]: Starting systemd-journal-catalog-update.service... 
Aug 13 00:01:43.411216 systemd[1]: Starting systemd-resolved.service... Aug 13 00:01:43.418305 systemd[1]: Starting systemd-timesyncd.service... Aug 13 00:01:43.424529 systemd[1]: Starting systemd-update-utmp.service... Aug 13 00:01:43.429736 systemd[1]: Finished clean-ca-certificates.service. Aug 13 00:01:43.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:43.436087 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:01:43.468000 audit[1519]: SYSTEM_BOOT pid=1519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Aug 13 00:01:43.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:43.471523 systemd[1]: Finished systemd-update-utmp.service. Aug 13 00:01:43.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:43.557255 systemd[1]: Started systemd-timesyncd.service. Aug 13 00:01:43.562202 systemd[1]: Reached target time-set.target. Aug 13 00:01:43.608958 systemd-resolved[1517]: Positive Trust Anchors: Aug 13 00:01:43.609295 systemd-resolved[1517]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:01:43.609376 systemd-resolved[1517]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 00:01:43.725340 systemd[1]: Finished systemd-journal-catalog-update.service. Aug 13 00:01:43.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:43.745154 systemd-resolved[1517]: Using system hostname 'ci-3510.3.8-a-af9fafecff'. Aug 13 00:01:43.746725 systemd[1]: Started systemd-resolved.service. Aug 13 00:01:43.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:01:43.751524 systemd[1]: Reached target network.target. Aug 13 00:01:43.756024 systemd[1]: Reached target network-online.target. Aug 13 00:01:43.760744 systemd[1]: Reached target nss-lookup.target. 
Aug 13 00:01:43.834000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Aug 13 00:01:43.834000 audit[1535]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd1d18cb0 a2=420 a3=0 items=0 ppid=1512 pid=1535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:01:43.834000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Aug 13 00:01:43.835904 augenrules[1535]: No rules Aug 13 00:01:43.836911 systemd[1]: Finished audit-rules.service. Aug 13 00:01:44.062292 systemd-timesyncd[1518]: Contacted time server 162.244.81.139:123 (0.flatcar.pool.ntp.org). Aug 13 00:01:44.062368 systemd-timesyncd[1518]: Initial clock synchronization to Wed 2025-08-13 00:01:44.063242 UTC. Aug 13 00:01:50.059803 ldconfig[1360]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:01:50.069232 systemd[1]: Finished ldconfig.service. Aug 13 00:01:50.075801 systemd[1]: Starting systemd-update-done.service... Aug 13 00:01:50.130192 systemd[1]: Finished systemd-update-done.service. Aug 13 00:01:50.135934 systemd[1]: Reached target sysinit.target. Aug 13 00:01:50.140414 systemd[1]: Started motdgen.path. Aug 13 00:01:50.144277 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Aug 13 00:01:50.150939 systemd[1]: Started logrotate.timer. Aug 13 00:01:50.155202 systemd[1]: Started mdadm.timer. Aug 13 00:01:50.159128 systemd[1]: Started systemd-tmpfiles-clean.timer. Aug 13 00:01:50.164130 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:01:50.164173 systemd[1]: Reached target paths.target. Aug 13 00:01:50.169033 systemd[1]: Reached target timers.target. 
Aug 13 00:01:50.173981 systemd[1]: Listening on dbus.socket. Aug 13 00:01:50.179101 systemd[1]: Starting docker.socket... Aug 13 00:01:50.208731 systemd[1]: Listening on sshd.socket. Aug 13 00:01:50.213140 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:01:50.213587 systemd[1]: Listening on docker.socket. Aug 13 00:01:50.218516 systemd[1]: Reached target sockets.target. Aug 13 00:01:50.223201 systemd[1]: Reached target basic.target. Aug 13 00:01:50.227837 systemd[1]: System is tainted: cgroupsv1 Aug 13 00:01:50.227910 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 00:01:50.227935 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 00:01:50.229090 systemd[1]: Starting containerd.service... Aug 13 00:01:50.234252 systemd[1]: Starting dbus.service... Aug 13 00:01:50.238588 systemd[1]: Starting enable-oem-cloudinit.service... Aug 13 00:01:50.244243 systemd[1]: Starting extend-filesystems.service... Aug 13 00:01:50.249405 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Aug 13 00:01:50.263082 systemd[1]: Starting kubelet.service... Aug 13 00:01:50.268107 systemd[1]: Starting motdgen.service... Aug 13 00:01:50.272981 systemd[1]: Started nvidia.service. Aug 13 00:01:50.278615 systemd[1]: Starting prepare-helm.service... Aug 13 00:01:50.283756 systemd[1]: Starting ssh-key-proc-cmdline.service... Aug 13 00:01:50.289748 systemd[1]: Starting sshd-keygen.service... Aug 13 00:01:50.295866 systemd[1]: Starting systemd-logind.service... 
Aug 13 00:01:50.300212 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:01:50.300300 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:01:50.302299 systemd[1]: Starting update-engine.service... Aug 13 00:01:50.307835 systemd[1]: Starting update-ssh-keys-after-ignition.service... Aug 13 00:01:50.318099 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:01:50.318371 systemd[1]: Finished ssh-key-proc-cmdline.service. Aug 13 00:01:50.327659 jq[1566]: true Aug 13 00:01:50.327934 jq[1550]: false Aug 13 00:01:50.339570 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:01:50.339816 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Aug 13 00:01:50.352472 extend-filesystems[1551]: Found loop1 Aug 13 00:01:50.352472 extend-filesystems[1551]: Found sda Aug 13 00:01:50.352472 extend-filesystems[1551]: Found sda1 Aug 13 00:01:50.352472 extend-filesystems[1551]: Found sda2 Aug 13 00:01:50.352472 extend-filesystems[1551]: Found sda3 Aug 13 00:01:50.352472 extend-filesystems[1551]: Found usr Aug 13 00:01:50.352472 extend-filesystems[1551]: Found sda4 Aug 13 00:01:50.352472 extend-filesystems[1551]: Found sda6 Aug 13 00:01:50.352472 extend-filesystems[1551]: Found sda7 Aug 13 00:01:50.352472 extend-filesystems[1551]: Found sda9 Aug 13 00:01:50.352472 extend-filesystems[1551]: Checking size of /dev/sda9 Aug 13 00:01:50.430684 jq[1578]: true Aug 13 00:01:50.410126 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:01:50.410371 systemd[1]: Finished motdgen.service. 
Aug 13 00:01:50.478587 env[1581]: time="2025-08-13T00:01:50.478541924Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Aug 13 00:01:50.492646 systemd-logind[1562]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Aug 13 00:01:50.494509 tar[1573]: linux-arm64/helm Aug 13 00:01:50.495530 systemd-logind[1562]: New seat seat0. Aug 13 00:01:50.508581 extend-filesystems[1551]: Old size kept for /dev/sda9 Aug 13 00:01:50.525039 extend-filesystems[1551]: Found sr0 Aug 13 00:01:50.513460 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:01:50.513710 systemd[1]: Finished extend-filesystems.service. Aug 13 00:01:50.591552 bash[1611]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:01:50.592501 systemd[1]: Finished update-ssh-keys-after-ignition.service. Aug 13 00:01:50.617953 env[1581]: time="2025-08-13T00:01:50.617914271Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 00:01:50.618173 env[1581]: time="2025-08-13T00:01:50.618157002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:01:50.619255 env[1581]: time="2025-08-13T00:01:50.619225655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:01:50.619344 env[1581]: time="2025-08-13T00:01:50.619329260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:01:50.619633 env[1581]: time="2025-08-13T00:01:50.619611914Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:01:50.619701 env[1581]: time="2025-08-13T00:01:50.619687717Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 00:01:50.619756 env[1581]: time="2025-08-13T00:01:50.619742040Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 13 00:01:50.619809 env[1581]: time="2025-08-13T00:01:50.619797163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 00:01:50.619958 env[1581]: time="2025-08-13T00:01:50.619942210Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:01:50.620235 env[1581]: time="2025-08-13T00:01:50.620218063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:01:50.620449 env[1581]: time="2025-08-13T00:01:50.620429754Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:01:50.620510 env[1581]: time="2025-08-13T00:01:50.620497517Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Aug 13 00:01:50.620615 env[1581]: time="2025-08-13T00:01:50.620599762Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 13 00:01:50.621089 env[1581]: time="2025-08-13T00:01:50.621070265Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:01:50.636587 env[1581]: time="2025-08-13T00:01:50.635423968Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 00:01:50.636587 env[1581]: time="2025-08-13T00:01:50.635456170Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 00:01:50.636587 env[1581]: time="2025-08-13T00:01:50.635470090Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 00:01:50.636587 env[1581]: time="2025-08-13T00:01:50.635530693Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 00:01:50.636587 env[1581]: time="2025-08-13T00:01:50.635548414Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 00:01:50.636587 env[1581]: time="2025-08-13T00:01:50.635607857Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 00:01:50.636587 env[1581]: time="2025-08-13T00:01:50.635624898Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 00:01:50.636587 env[1581]: time="2025-08-13T00:01:50.635997676Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 00:01:50.636587 env[1581]: time="2025-08-13T00:01:50.636015677Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Aug 13 00:01:50.636587 env[1581]: time="2025-08-13T00:01:50.636036438Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 00:01:50.636587 env[1581]: time="2025-08-13T00:01:50.636048999Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 00:01:50.636587 env[1581]: time="2025-08-13T00:01:50.636063560Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 00:01:50.636587 env[1581]: time="2025-08-13T00:01:50.636188286Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 00:01:50.636587 env[1581]: time="2025-08-13T00:01:50.636290571Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 00:01:50.637134 env[1581]: time="2025-08-13T00:01:50.637033367Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 00:01:50.637134 env[1581]: time="2025-08-13T00:01:50.637065409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 00:01:50.637134 env[1581]: time="2025-08-13T00:01:50.637079689Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 00:01:50.637320 env[1581]: time="2025-08-13T00:01:50.637294900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 00:01:50.637389 env[1581]: time="2025-08-13T00:01:50.637375424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 00:01:50.637454 env[1581]: time="2025-08-13T00:01:50.637441507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Aug 13 00:01:50.637535 env[1581]: time="2025-08-13T00:01:50.637510110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 00:01:50.637597 env[1581]: time="2025-08-13T00:01:50.637578114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:01:50.637681 env[1581]: time="2025-08-13T00:01:50.637654237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 00:01:50.637760 env[1581]: time="2025-08-13T00:01:50.637746882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 00:01:50.637832 env[1581]: time="2025-08-13T00:01:50.637819446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 00:01:50.637921 env[1581]: time="2025-08-13T00:01:50.637908170Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 00:01:50.638122 env[1581]: time="2025-08-13T00:01:50.638107220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 00:01:50.638211 env[1581]: time="2025-08-13T00:01:50.638198104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 00:01:50.638282 env[1581]: time="2025-08-13T00:01:50.638269748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 00:01:50.638355 env[1581]: time="2025-08-13T00:01:50.638330231Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 00:01:50.638413 env[1581]: time="2025-08-13T00:01:50.638399194Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Aug 13 00:01:50.638474 env[1581]: time="2025-08-13T00:01:50.638461917Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 00:01:50.638544 env[1581]: time="2025-08-13T00:01:50.638519400Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Aug 13 00:01:50.638638 env[1581]: time="2025-08-13T00:01:50.638624165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 13 00:01:50.638971 env[1581]: time="2025-08-13T00:01:50.638913179Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:01:50.654028 env[1581]: time="2025-08-13T00:01:50.639153591Z" level=info msg="Connect containerd service" Aug 13 00:01:50.654028 env[1581]: time="2025-08-13T00:01:50.639190673Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:01:50.654028 env[1581]: time="2025-08-13T00:01:50.639853385Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:01:50.654028 env[1581]: time="2025-08-13T00:01:50.640125398Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:01:50.654028 env[1581]: time="2025-08-13T00:01:50.640162560Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Aug 13 00:01:50.654028 env[1581]: time="2025-08-13T00:01:50.644977716Z" level=info msg="containerd successfully booted in 0.177947s" Aug 13 00:01:50.654028 env[1581]: time="2025-08-13T00:01:50.645600587Z" level=info msg="Start subscribing containerd event" Aug 13 00:01:50.654028 env[1581]: time="2025-08-13T00:01:50.645638068Z" level=info msg="Start recovering state" Aug 13 00:01:50.654028 env[1581]: time="2025-08-13T00:01:50.645707832Z" level=info msg="Start event monitor" Aug 13 00:01:50.654028 env[1581]: time="2025-08-13T00:01:50.645725713Z" level=info msg="Start snapshots syncer" Aug 13 00:01:50.654028 env[1581]: time="2025-08-13T00:01:50.645745834Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:01:50.654028 env[1581]: time="2025-08-13T00:01:50.645753634Z" level=info msg="Start streaming server" Aug 13 00:01:50.640306 systemd[1]: Started containerd.service. Aug 13 00:01:50.682254 systemd[1]: nvidia.service: Deactivated successfully. Aug 13 00:01:50.919381 dbus-daemon[1549]: [system] SELinux support is enabled Aug 13 00:01:50.925984 dbus-daemon[1549]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 13 00:01:50.919584 systemd[1]: Started dbus.service. Aug 13 00:01:50.925413 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:01:50.925433 systemd[1]: Reached target system-config.target. Aug 13 00:01:50.934212 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:01:50.934233 systemd[1]: Reached target user-config.target. Aug 13 00:01:50.940293 systemd[1]: Started systemd-logind.service. 
Aug 13 00:01:51.012286 update_engine[1564]: I0813 00:01:50.999954 1564 main.cc:92] Flatcar Update Engine starting Aug 13 00:01:51.068262 systemd[1]: Started update-engine.service. Aug 13 00:01:51.074395 systemd[1]: Started locksmithd.service. Aug 13 00:01:51.078894 update_engine[1564]: I0813 00:01:51.078469 1564 update_check_scheduler.cc:74] Next update check in 6m45s Aug 13 00:01:51.200276 tar[1573]: linux-arm64/LICENSE Aug 13 00:01:51.200395 tar[1573]: linux-arm64/README.md Aug 13 00:01:51.205304 systemd[1]: Finished prepare-helm.service. Aug 13 00:01:51.444500 systemd[1]: Started kubelet.service. Aug 13 00:01:51.933935 kubelet[1673]: E0813 00:01:51.933862 1673 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:01:51.935678 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:01:51.935815 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:01:52.025661 sshd_keygen[1574]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:01:52.043085 systemd[1]: Finished sshd-keygen.service. Aug 13 00:01:52.050730 systemd[1]: Starting issuegen.service... Aug 13 00:01:52.055830 systemd[1]: Started waagent.service. Aug 13 00:01:52.063824 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:01:52.064128 systemd[1]: Finished issuegen.service. Aug 13 00:01:52.069718 systemd[1]: Starting systemd-user-sessions.service... Aug 13 00:01:52.117274 systemd[1]: Finished systemd-user-sessions.service. Aug 13 00:01:52.124566 systemd[1]: Started getty@tty1.service. Aug 13 00:01:52.135214 systemd[1]: Started serial-getty@ttyAMA0.service. Aug 13 00:01:52.140556 systemd[1]: Reached target getty.target. 
Aug 13 00:01:52.145020 systemd[1]: Reached target multi-user.target. Aug 13 00:01:52.151779 systemd[1]: Starting systemd-update-utmp-runlevel.service... Aug 13 00:01:52.166489 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Aug 13 00:01:52.166743 systemd[1]: Finished systemd-update-utmp-runlevel.service. Aug 13 00:01:52.174783 systemd[1]: Startup finished in 18.257s (kernel) + 41.425s (userspace) = 59.682s. Aug 13 00:01:52.404551 locksmithd[1665]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:01:52.997166 login[1701]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Aug 13 00:01:53.023004 login[1702]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 00:01:53.160696 systemd[1]: Created slice user-500.slice. Aug 13 00:01:53.161716 systemd[1]: Starting user-runtime-dir@500.service... Aug 13 00:01:53.163929 systemd-logind[1562]: New session 2 of user core. Aug 13 00:01:53.210110 systemd[1]: Finished user-runtime-dir@500.service. Aug 13 00:01:53.211471 systemd[1]: Starting user@500.service... Aug 13 00:01:53.272180 (systemd)[1708]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:01:53.673378 systemd[1708]: Queued start job for default target default.target. Aug 13 00:01:53.673631 systemd[1708]: Reached target paths.target. Aug 13 00:01:53.673646 systemd[1708]: Reached target sockets.target. Aug 13 00:01:53.673657 systemd[1708]: Reached target timers.target. Aug 13 00:01:53.673667 systemd[1708]: Reached target basic.target. Aug 13 00:01:53.673786 systemd[1]: Started user@500.service. Aug 13 00:01:53.674667 systemd[1]: Started session-2.scope. Aug 13 00:01:53.674927 systemd[1708]: Reached target default.target. Aug 13 00:01:53.675112 systemd[1708]: Startup finished in 396ms. 
Aug 13 00:01:53.997522 login[1701]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 00:01:54.002113 systemd[1]: Started session-1.scope. Aug 13 00:01:54.002559 systemd-logind[1562]: New session 1 of user core. Aug 13 00:02:00.174217 waagent[1696]: 2025-08-13T00:02:00.174106Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Aug 13 00:02:00.196691 waagent[1696]: 2025-08-13T00:02:00.196592Z INFO Daemon Daemon OS: flatcar 3510.3.8 Aug 13 00:02:00.201531 waagent[1696]: 2025-08-13T00:02:00.201454Z INFO Daemon Daemon Python: 3.9.16 Aug 13 00:02:00.206370 waagent[1696]: 2025-08-13T00:02:00.206254Z INFO Daemon Daemon Run daemon Aug 13 00:02:00.211008 waagent[1696]: 2025-08-13T00:02:00.210935Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.8' Aug 13 00:02:00.242077 waagent[1696]: 2025-08-13T00:02:00.241923Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Aug 13 00:02:00.261420 waagent[1696]: 2025-08-13T00:02:00.261267Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Aug 13 00:02:00.272287 waagent[1696]: 2025-08-13T00:02:00.272190Z INFO Daemon Daemon cloud-init is enabled: False Aug 13 00:02:00.278161 waagent[1696]: 2025-08-13T00:02:00.278065Z INFO Daemon Daemon Using waagent for provisioning Aug 13 00:02:00.284777 waagent[1696]: 2025-08-13T00:02:00.284692Z INFO Daemon Daemon Activate resource disk Aug 13 00:02:00.290257 waagent[1696]: 2025-08-13T00:02:00.290166Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Aug 13 00:02:00.305634 waagent[1696]: 2025-08-13T00:02:00.305537Z INFO Daemon Daemon Found device: None Aug 13 00:02:00.311407 waagent[1696]: 2025-08-13T00:02:00.311309Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Aug 13 00:02:00.322248 waagent[1696]: 2025-08-13T00:02:00.322152Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Aug 13 00:02:00.336011 waagent[1696]: 2025-08-13T00:02:00.335933Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 13 00:02:00.342758 waagent[1696]: 2025-08-13T00:02:00.342670Z INFO Daemon Daemon Running default provisioning handler Aug 13 00:02:00.357892 waagent[1696]: 2025-08-13T00:02:00.357717Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Aug 13 00:02:00.374636 waagent[1696]: 2025-08-13T00:02:00.374485Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Aug 13 00:02:00.385243 waagent[1696]: 2025-08-13T00:02:00.385143Z INFO Daemon Daemon cloud-init is enabled: False Aug 13 00:02:00.390917 waagent[1696]: 2025-08-13T00:02:00.390794Z INFO Daemon Daemon Copying ovf-env.xml Aug 13 00:02:00.534565 waagent[1696]: 2025-08-13T00:02:00.534339Z INFO Daemon Daemon Successfully mounted dvd Aug 13 00:02:00.639345 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Aug 13 00:02:00.685291 waagent[1696]: 2025-08-13T00:02:00.685140Z INFO Daemon Daemon Detect protocol endpoint Aug 13 00:02:00.690922 waagent[1696]: 2025-08-13T00:02:00.690809Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 13 00:02:00.698278 waagent[1696]: 2025-08-13T00:02:00.698181Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Aug 13 00:02:00.705321 waagent[1696]: 2025-08-13T00:02:00.705225Z INFO Daemon Daemon Test for route to 168.63.129.16 Aug 13 00:02:00.711607 waagent[1696]: 2025-08-13T00:02:00.711517Z INFO Daemon Daemon Route to 168.63.129.16 exists Aug 13 00:02:00.717806 waagent[1696]: 2025-08-13T00:02:00.717715Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Aug 13 00:02:00.865778 waagent[1696]: 2025-08-13T00:02:00.865708Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Aug 13 00:02:00.873321 waagent[1696]: 2025-08-13T00:02:00.873278Z INFO Daemon Daemon Wire protocol version:2012-11-30 Aug 13 00:02:00.878910 waagent[1696]: 2025-08-13T00:02:00.878823Z INFO Daemon Daemon Server preferred version:2015-04-05 Aug 13 00:02:01.454083 waagent[1696]: 2025-08-13T00:02:01.453898Z INFO Daemon Daemon Initializing goal state during protocol detection Aug 13 00:02:01.471082 waagent[1696]: 2025-08-13T00:02:01.470971Z INFO Daemon Daemon Forcing an update of the goal state.. 
Aug 13 00:02:01.477587 waagent[1696]: 2025-08-13T00:02:01.477494Z INFO Daemon Daemon Fetching goal state [incarnation 1] Aug 13 00:02:01.574296 waagent[1696]: 2025-08-13T00:02:01.574117Z INFO Daemon Daemon Found private key matching thumbprint 542A17A50D5069BEEAC63305D1CE103574612148 Aug 13 00:02:01.584729 waagent[1696]: 2025-08-13T00:02:01.584640Z INFO Daemon Daemon Certificate with thumbprint 7F62010F92251816187A96723BAF08CF598A0E38 has no matching private key. Aug 13 00:02:01.596004 waagent[1696]: 2025-08-13T00:02:01.595917Z INFO Daemon Daemon Fetch goal state completed Aug 13 00:02:02.101429 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:02:02.101609 systemd[1]: Stopped kubelet.service. Aug 13 00:02:02.103413 systemd[1]: Starting kubelet.service... Aug 13 00:02:02.192909 waagent[1696]: 2025-08-13T00:02:02.192821Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 993edd36-56b3-46e5-9fe0-380264a8bacc New eTag: 2880259175860311389] Aug 13 00:02:02.208121 waagent[1696]: 2025-08-13T00:02:02.208020Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Aug 13 00:02:02.226863 waagent[1696]: 2025-08-13T00:02:02.226793Z INFO Daemon Daemon Starting provisioning Aug 13 00:02:02.232825 waagent[1696]: 2025-08-13T00:02:02.232720Z INFO Daemon Daemon Handle ovf-env.xml. Aug 13 00:02:02.239019 waagent[1696]: 2025-08-13T00:02:02.238910Z INFO Daemon Daemon Set hostname [ci-3510.3.8-a-af9fafecff] Aug 13 00:02:02.569959 systemd[1]: Started kubelet.service. 
Aug 13 00:02:02.605803 waagent[1696]: 2025-08-13T00:02:02.604893Z INFO Daemon Daemon Publish hostname [ci-3510.3.8-a-af9fafecff] Aug 13 00:02:02.613118 waagent[1696]: 2025-08-13T00:02:02.612977Z INFO Daemon Daemon Examine /proc/net/route for primary interface Aug 13 00:02:02.620284 waagent[1696]: 2025-08-13T00:02:02.620187Z INFO Daemon Daemon Primary interface is [eth0] Aug 13 00:02:02.625694 kubelet[1761]: E0813 00:02:02.625657 1761 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:02:02.628215 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:02:02.628361 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:02:02.642403 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Aug 13 00:02:02.642634 systemd[1]: Stopped systemd-networkd-wait-online.service. Aug 13 00:02:02.642700 systemd[1]: Stopping systemd-networkd-wait-online.service... Aug 13 00:02:02.642923 systemd[1]: Stopping systemd-networkd.service... Aug 13 00:02:02.646966 systemd-networkd[1297]: eth0: DHCPv6 lease lost Aug 13 00:02:02.648728 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:02:02.649020 systemd[1]: Stopped systemd-networkd.service. Aug 13 00:02:02.651188 systemd[1]: Starting systemd-networkd.service... 
Aug 13 00:02:02.686016 systemd-networkd[1771]: enP43685s1: Link UP Aug 13 00:02:02.686024 systemd-networkd[1771]: enP43685s1: Gained carrier Aug 13 00:02:02.687039 systemd-networkd[1771]: eth0: Link UP Aug 13 00:02:02.687049 systemd-networkd[1771]: eth0: Gained carrier Aug 13 00:02:02.687393 systemd-networkd[1771]: lo: Link UP Aug 13 00:02:02.687401 systemd-networkd[1771]: lo: Gained carrier Aug 13 00:02:02.687646 systemd-networkd[1771]: eth0: Gained IPv6LL Aug 13 00:02:02.688623 systemd-networkd[1771]: Enumeration completed Aug 13 00:02:02.688828 systemd[1]: Started systemd-networkd.service. Aug 13 00:02:02.690210 systemd-networkd[1771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:02:02.691415 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 13 00:02:02.699531 waagent[1696]: 2025-08-13T00:02:02.692359Z INFO Daemon Daemon Create user account if not exists Aug 13 00:02:02.703471 waagent[1696]: 2025-08-13T00:02:02.700827Z INFO Daemon Daemon User core already exists, skip useradd Aug 13 00:02:02.710325 waagent[1696]: 2025-08-13T00:02:02.710208Z INFO Daemon Daemon Configure sudoer Aug 13 00:02:02.719973 systemd-networkd[1771]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16 Aug 13 00:02:02.722426 systemd[1]: Finished systemd-networkd-wait-online.service. Aug 13 00:02:02.730326 waagent[1696]: 2025-08-13T00:02:02.730226Z INFO Daemon Daemon Configure sshd Aug 13 00:02:02.735550 waagent[1696]: 2025-08-13T00:02:02.735454Z INFO Daemon Daemon Deploy ssh public key. Aug 13 00:02:03.933205 waagent[1696]: 2025-08-13T00:02:03.933113Z INFO Daemon Daemon Provisioning complete Aug 13 00:02:03.953149 waagent[1696]: 2025-08-13T00:02:03.953079Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Aug 13 00:02:03.960098 waagent[1696]: 2025-08-13T00:02:03.960001Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Aug 13 00:02:03.971749 waagent[1696]: 2025-08-13T00:02:03.971654Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Aug 13 00:02:04.291124 waagent[1781]: 2025-08-13T00:02:04.290967Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Aug 13 00:02:04.291925 waagent[1781]: 2025-08-13T00:02:04.291844Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:02:04.292071 waagent[1781]: 2025-08-13T00:02:04.292021Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:02:04.305114 waagent[1781]: 2025-08-13T00:02:04.305015Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Aug 13 00:02:04.305325 waagent[1781]: 2025-08-13T00:02:04.305273Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Aug 13 00:02:04.380444 waagent[1781]: 2025-08-13T00:02:04.380280Z INFO ExtHandler ExtHandler Found private key matching thumbprint 542A17A50D5069BEEAC63305D1CE103574612148 Aug 13 00:02:04.380689 waagent[1781]: 2025-08-13T00:02:04.380634Z INFO ExtHandler ExtHandler Certificate with thumbprint 7F62010F92251816187A96723BAF08CF598A0E38 has no matching private key. 
Aug 13 00:02:04.380956 waagent[1781]: 2025-08-13T00:02:04.380903Z INFO ExtHandler ExtHandler Fetch goal state completed Aug 13 00:02:04.396266 waagent[1781]: 2025-08-13T00:02:04.396206Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 930f1af2-2eab-47ad-bf72-d6ef6c5b75a8 New eTag: 2880259175860311389] Aug 13 00:02:04.396913 waagent[1781]: 2025-08-13T00:02:04.396828Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Aug 13 00:02:04.512466 waagent[1781]: 2025-08-13T00:02:04.512307Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Aug 13 00:02:04.535088 waagent[1781]: 2025-08-13T00:02:04.534987Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1781 Aug 13 00:02:04.539079 waagent[1781]: 2025-08-13T00:02:04.538998Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Aug 13 00:02:04.540508 waagent[1781]: 2025-08-13T00:02:04.540444Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Aug 13 00:02:04.688737 waagent[1781]: 2025-08-13T00:02:04.688662Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Aug 13 00:02:04.689231 waagent[1781]: 2025-08-13T00:02:04.689168Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Aug 13 00:02:04.697373 waagent[1781]: 2025-08-13T00:02:04.697300Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Aug 13 00:02:04.697996 waagent[1781]: 2025-08-13T00:02:04.697934Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Aug 13 00:02:04.699245 waagent[1781]: 2025-08-13T00:02:04.699175Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Aug 13 00:02:04.700673 waagent[1781]: 2025-08-13T00:02:04.700599Z INFO ExtHandler ExtHandler Starting env monitor service. Aug 13 00:02:04.701381 waagent[1781]: 2025-08-13T00:02:04.701318Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:02:04.701643 waagent[1781]: 2025-08-13T00:02:04.701592Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:02:04.702348 waagent[1781]: 2025-08-13T00:02:04.702290Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Aug 13 00:02:04.702755 waagent[1781]: 2025-08-13T00:02:04.702701Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Aug 13 00:02:04.702755 waagent[1781]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Aug 13 00:02:04.702755 waagent[1781]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Aug 13 00:02:04.702755 waagent[1781]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Aug 13 00:02:04.702755 waagent[1781]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:02:04.702755 waagent[1781]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:02:04.702755 waagent[1781]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:02:04.705495 waagent[1781]: 2025-08-13T00:02:04.705305Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Aug 13 00:02:04.706523 waagent[1781]: 2025-08-13T00:02:04.706454Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:02:04.706803 waagent[1781]: 2025-08-13T00:02:04.706750Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:02:04.707518 waagent[1781]: 2025-08-13T00:02:04.707456Z INFO EnvHandler ExtHandler Configure routes Aug 13 00:02:04.707754 waagent[1781]: 2025-08-13T00:02:04.707707Z INFO EnvHandler ExtHandler Gateway:None Aug 13 00:02:04.708004 waagent[1781]: 2025-08-13T00:02:04.707956Z INFO EnvHandler ExtHandler Routes:None Aug 13 00:02:04.709056 waagent[1781]: 2025-08-13T00:02:04.708995Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Aug 13 00:02:04.709151 waagent[1781]: 2025-08-13T00:02:04.709086Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Aug 13 00:02:04.709913 waagent[1781]: 2025-08-13T00:02:04.709815Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Aug 13 00:02:04.710008 waagent[1781]: 2025-08-13T00:02:04.709944Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Aug 13 00:02:04.710536 waagent[1781]: 2025-08-13T00:02:04.710465Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Aug 13 00:02:04.728299 waagent[1781]: 2025-08-13T00:02:04.728223Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Aug 13 00:02:04.729007 waagent[1781]: 2025-08-13T00:02:04.728952Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Aug 13 00:02:04.730009 waagent[1781]: 2025-08-13T00:02:04.729947Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. 
Error: 'NoneType' object has no attribute 'getheaders' Aug 13 00:02:04.758257 waagent[1781]: 2025-08-13T00:02:04.758119Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1771' Aug 13 00:02:04.775023 waagent[1781]: 2025-08-13T00:02:04.774947Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. Aug 13 00:02:04.860909 waagent[1781]: 2025-08-13T00:02:04.860677Z INFO MonitorHandler ExtHandler Network interfaces: Aug 13 00:02:04.860909 waagent[1781]: Executing ['ip', '-a', '-o', 'link']: Aug 13 00:02:04.860909 waagent[1781]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Aug 13 00:02:04.860909 waagent[1781]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:6e:60:05 brd ff:ff:ff:ff:ff:ff Aug 13 00:02:04.860909 waagent[1781]: 3: enP43685s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:6e:60:05 brd ff:ff:ff:ff:ff:ff\ altname enP43685p0s2 Aug 13 00:02:04.860909 waagent[1781]: Executing ['ip', '-4', '-a', '-o', 'address']: Aug 13 00:02:04.860909 waagent[1781]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Aug 13 00:02:04.860909 waagent[1781]: 2: eth0 inet 10.200.20.38/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Aug 13 00:02:04.860909 waagent[1781]: Executing ['ip', '-6', '-a', '-o', 'address']: Aug 13 00:02:04.860909 waagent[1781]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Aug 13 00:02:04.860909 waagent[1781]: 2: eth0 inet6 fe80::20d:3aff:fe6e:6005/64 scope link \ valid_lft forever preferred_lft forever Aug 13 00:02:05.276717 waagent[1781]: 2025-08-13T00:02:05.276561Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules Aug 13 00:02:05.280386 waagent[1781]: 
2025-08-13T00:02:05.280253Z INFO EnvHandler ExtHandler Firewall rules: Aug 13 00:02:05.280386 waagent[1781]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:02:05.280386 waagent[1781]: pkts bytes target prot opt in out source destination Aug 13 00:02:05.280386 waagent[1781]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:02:05.280386 waagent[1781]: pkts bytes target prot opt in out source destination Aug 13 00:02:05.280386 waagent[1781]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:02:05.280386 waagent[1781]: pkts bytes target prot opt in out source destination Aug 13 00:02:05.280386 waagent[1781]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Aug 13 00:02:05.280386 waagent[1781]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Aug 13 00:02:05.282296 waagent[1781]: 2025-08-13T00:02:05.282242Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Aug 13 00:02:05.477888 waagent[1781]: 2025-08-13T00:02:05.477798Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.14.0.1 -- exiting Aug 13 00:02:05.976772 waagent[1696]: 2025-08-13T00:02:05.976653Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Aug 13 00:02:05.982851 waagent[1696]: 2025-08-13T00:02:05.982790Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.14.0.1 to be the latest agent Aug 13 00:02:07.340895 waagent[1821]: 2025-08-13T00:02:07.340780Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.14.0.1) Aug 13 00:02:07.342023 waagent[1821]: 2025-08-13T00:02:07.341960Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.8 Aug 13 00:02:07.342276 waagent[1821]: 2025-08-13T00:02:07.342229Z INFO ExtHandler ExtHandler Python: 3.9.16 Aug 13 00:02:07.342501 waagent[1821]: 2025-08-13T00:02:07.342455Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Aug 13 
00:02:07.357694 waagent[1821]: 2025-08-13T00:02:07.357566Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1; Aug 13 00:02:07.358381 waagent[1821]: 2025-08-13T00:02:07.358323Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:02:07.358658 waagent[1821]: 2025-08-13T00:02:07.358611Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:02:07.359030 waagent[1821]: 2025-08-13T00:02:07.358978Z INFO ExtHandler ExtHandler Initializing the goal state... Aug 13 00:02:07.373233 waagent[1821]: 2025-08-13T00:02:07.373140Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Aug 13 00:02:07.386060 waagent[1821]: 2025-08-13T00:02:07.385993Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Aug 13 00:02:07.387414 waagent[1821]: 2025-08-13T00:02:07.387346Z INFO ExtHandler Aug 13 00:02:07.387704 waagent[1821]: 2025-08-13T00:02:07.387655Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 7c1aab3d-3cdb-4547-885a-38cde34ae1f2 eTag: 2880259175860311389 source: Fabric] Aug 13 00:02:07.388627 waagent[1821]: 2025-08-13T00:02:07.388572Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Aug 13 00:02:07.390062 waagent[1821]: 2025-08-13T00:02:07.390004Z INFO ExtHandler Aug 13 00:02:07.390317 waagent[1821]: 2025-08-13T00:02:07.390269Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Aug 13 00:02:07.397720 waagent[1821]: 2025-08-13T00:02:07.397664Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Aug 13 00:02:07.398489 waagent[1821]: 2025-08-13T00:02:07.398444Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Aug 13 00:02:07.460946 waagent[1821]: 2025-08-13T00:02:07.460860Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Aug 13 00:02:07.542975 waagent[1821]: 2025-08-13T00:02:07.542818Z INFO ExtHandler Downloaded certificate {'thumbprint': '7F62010F92251816187A96723BAF08CF598A0E38', 'hasPrivateKey': False} Aug 13 00:02:07.544413 waagent[1821]: 2025-08-13T00:02:07.544344Z INFO ExtHandler Downloaded certificate {'thumbprint': '542A17A50D5069BEEAC63305D1CE103574612148', 'hasPrivateKey': True} Aug 13 00:02:07.545727 waagent[1821]: 2025-08-13T00:02:07.545667Z INFO ExtHandler Fetch goal state from WireServer completed Aug 13 00:02:07.546805 waagent[1821]: 2025-08-13T00:02:07.546748Z INFO ExtHandler ExtHandler Goal state initialization completed. 
Aug 13 00:02:07.568493 waagent[1821]: 2025-08-13T00:02:07.568366Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Aug 13 00:02:07.577820 waagent[1821]: 2025-08-13T00:02:07.577701Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Aug 13 00:02:07.582289 waagent[1821]: 2025-08-13T00:02:07.582161Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT'] Aug 13 00:02:07.582681 waagent[1821]: 2025-08-13T00:02:07.582631Z INFO ExtHandler ExtHandler Checking state of the firewall Aug 13 00:02:07.658019 waagent[1821]: 2025-08-13T00:02:07.657869Z WARNING ExtHandler ExtHandler The firewall rules for Azure Fabric are not setup correctly (the environment thread will fix it): The following rules are missing: ['ACCEPT DNS'] due to: ['iptables: Bad rule (does a matching rule exist in that chain?).\n']. 
Current state: Aug 13 00:02:07.658019 waagent[1821]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:02:07.658019 waagent[1821]: pkts bytes target prot opt in out source destination Aug 13 00:02:07.658019 waagent[1821]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:02:07.658019 waagent[1821]: pkts bytes target prot opt in out source destination Aug 13 00:02:07.658019 waagent[1821]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:02:07.658019 waagent[1821]: pkts bytes target prot opt in out source destination Aug 13 00:02:07.658019 waagent[1821]: 132 13264 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Aug 13 00:02:07.658019 waagent[1821]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Aug 13 00:02:07.661374 waagent[1821]: 2025-08-13T00:02:07.661297Z INFO ExtHandler ExtHandler Setting up persistent firewall rules Aug 13 00:02:07.665080 waagent[1821]: 2025-08-13T00:02:07.664946Z INFO ExtHandler ExtHandler The firewalld service is not present on the system Aug 13 00:02:07.665541 waagent[1821]: 2025-08-13T00:02:07.665490Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Aug 13 00:02:07.666071 waagent[1821]: 2025-08-13T00:02:07.666016Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Aug 13 00:02:07.675214 waagent[1821]: 2025-08-13T00:02:07.675152Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Aug 13 00:02:07.676063 waagent[1821]: 2025-08-13T00:02:07.676001Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Aug 13 00:02:07.685120 waagent[1821]: 2025-08-13T00:02:07.685033Z INFO ExtHandler ExtHandler WALinuxAgent-2.14.0.1 running as process 1821 Aug 13 00:02:07.688891 waagent[1821]: 2025-08-13T00:02:07.688789Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Aug 13 00:02:07.690044 waagent[1821]: 2025-08-13T00:02:07.689982Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled Aug 13 00:02:07.691145 waagent[1821]: 2025-08-13T00:02:07.691090Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Aug 13 00:02:07.694269 waagent[1821]: 2025-08-13T00:02:07.694204Z INFO ExtHandler ExtHandler Signing certificate written to /var/lib/waagent/microsoft_root_certificate.pem Aug 13 00:02:07.694828 waagent[1821]: 2025-08-13T00:02:07.694774Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Aug 13 00:02:07.696426 waagent[1821]: 2025-08-13T00:02:07.696359Z INFO ExtHandler ExtHandler Starting env monitor service. Aug 13 00:02:07.696817 waagent[1821]: 2025-08-13T00:02:07.696746Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:02:07.697363 waagent[1821]: 2025-08-13T00:02:07.697294Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:02:07.698030 waagent[1821]: 2025-08-13T00:02:07.697964Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Aug 13 00:02:07.698804 waagent[1821]: 2025-08-13T00:02:07.698733Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:02:07.699025 waagent[1821]: 2025-08-13T00:02:07.698968Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Aug 13 00:02:07.699369 waagent[1821]: 2025-08-13T00:02:07.699302Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Aug 13 00:02:07.699369 waagent[1821]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Aug 13 00:02:07.699369 waagent[1821]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Aug 13 00:02:07.699369 waagent[1821]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Aug 13 00:02:07.699369 waagent[1821]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:02:07.699369 waagent[1821]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:02:07.699369 waagent[1821]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 13 00:02:07.699788 waagent[1821]: 2025-08-13T00:02:07.699729Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:02:07.703175 waagent[1821]: 2025-08-13T00:02:07.703019Z INFO EnvHandler ExtHandler Configure routes Aug 13 00:02:07.704129 waagent[1821]: 2025-08-13T00:02:07.704052Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Aug 13 00:02:07.704574 waagent[1821]: 2025-08-13T00:02:07.704513Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Aug 13 00:02:07.706069 waagent[1821]: 2025-08-13T00:02:07.705987Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Aug 13 00:02:07.706320 waagent[1821]: 2025-08-13T00:02:07.706254Z INFO EnvHandler ExtHandler Gateway:None Aug 13 00:02:07.706590 waagent[1821]: 2025-08-13T00:02:07.706522Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Aug 13 00:02:07.706888 waagent[1821]: 2025-08-13T00:02:07.706803Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Aug 13 00:02:07.707596 waagent[1821]: 2025-08-13T00:02:07.707532Z INFO EnvHandler ExtHandler Routes:None Aug 13 00:02:07.729293 waagent[1821]: 2025-08-13T00:02:07.729082Z INFO MonitorHandler ExtHandler Network interfaces: Aug 13 00:02:07.729293 waagent[1821]: Executing ['ip', '-a', '-o', 'link']: Aug 13 00:02:07.729293 waagent[1821]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Aug 13 00:02:07.729293 waagent[1821]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:6e:60:05 brd ff:ff:ff:ff:ff:ff Aug 13 00:02:07.729293 waagent[1821]: 3: enP43685s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:6e:60:05 brd ff:ff:ff:ff:ff:ff\ altname enP43685p0s2 Aug 13 00:02:07.729293 waagent[1821]: Executing ['ip', '-4', '-a', '-o', 'address']: Aug 13 00:02:07.729293 waagent[1821]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Aug 13 00:02:07.729293 waagent[1821]: 2: eth0 inet 10.200.20.38/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Aug 13 00:02:07.729293 waagent[1821]: Executing ['ip', '-6', '-a', '-o', 'address']: Aug 13 00:02:07.729293 waagent[1821]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Aug 13 00:02:07.729293 waagent[1821]: 2: eth0 inet6 fe80::20d:3aff:fe6e:6005/64 scope link \ valid_lft forever preferred_lft forever Aug 13 00:02:07.740097 waagent[1821]: 2025-08-13T00:02:07.739996Z INFO ExtHandler ExtHandler Downloading agent manifest Aug 13 00:02:07.757518 waagent[1821]: 2025-08-13T00:02:07.757422Z INFO ExtHandler ExtHandler Aug 13 00:02:07.758572 waagent[1821]: 2025-08-13T00:02:07.758498Z INFO ExtHandler ExtHandler 
ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 972042fe-25ed-44c9-a52a-410dcf028c72 correlation 2e9f62ce-ca8f-425d-8883-efe363f54f92 created: 2025-08-13T00:00:07.825676Z] Aug 13 00:02:07.761556 waagent[1821]: 2025-08-13T00:02:07.761485Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Aug 13 00:02:07.765252 waagent[1821]: 2025-08-13T00:02:07.765176Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 7 ms] Aug 13 00:02:07.790981 waagent[1821]: 2025-08-13T00:02:07.790908Z INFO ExtHandler ExtHandler Looking for existing remote access users. Aug 13 00:02:07.793711 waagent[1821]: 2025-08-13T00:02:07.793644Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.14.0.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 16C885DA-71B3-4377-8187-2A6E2D760C67;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] Aug 13 00:02:07.814241 waagent[1821]: 2025-08-13T00:02:07.814154Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Aug 13 00:02:07.829997 waagent[1821]: 2025-08-13T00:02:07.829839Z WARNING EnvHandler ExtHandler The firewall is not configured correctly. The following rules are missing: ['ACCEPT DNS'] due to: ['iptables: Bad rule (does a matching rule exist in that chain?).\n']. Will reset it. 
Current state: Aug 13 00:02:07.829997 waagent[1821]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:02:07.829997 waagent[1821]: pkts bytes target prot opt in out source destination Aug 13 00:02:07.829997 waagent[1821]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:02:07.829997 waagent[1821]: pkts bytes target prot opt in out source destination Aug 13 00:02:07.829997 waagent[1821]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:02:07.829997 waagent[1821]: pkts bytes target prot opt in out source destination Aug 13 00:02:07.829997 waagent[1821]: 161 19731 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Aug 13 00:02:07.829997 waagent[1821]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Aug 13 00:02:07.899241 waagent[1821]: 2025-08-13T00:02:07.899115Z INFO EnvHandler ExtHandler The firewall was setup successfully: Aug 13 00:02:07.899241 waagent[1821]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:02:07.899241 waagent[1821]: pkts bytes target prot opt in out source destination Aug 13 00:02:07.899241 waagent[1821]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:02:07.899241 waagent[1821]: pkts bytes target prot opt in out source destination Aug 13 00:02:07.899241 waagent[1821]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Aug 13 00:02:07.899241 waagent[1821]: pkts bytes target prot opt in out source destination Aug 13 00:02:07.899241 waagent[1821]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Aug 13 00:02:07.899241 waagent[1821]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Aug 13 00:02:07.899241 waagent[1821]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Aug 13 00:02:12.851555 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 00:02:12.851719 systemd[1]: Stopped kubelet.service. Aug 13 00:02:12.853265 systemd[1]: Starting kubelet.service... Aug 13 00:02:13.159066 systemd[1]: Started kubelet.service. 
Aug 13 00:02:13.202418 kubelet[1877]: E0813 00:02:13.202377 1877 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:02:13.204417 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:02:13.204551 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:02:22.744663 systemd[1]: Created slice system-sshd.slice. Aug 13 00:02:22.745956 systemd[1]: Started sshd@0-10.200.20.38:22-10.200.16.10:34946.service. Aug 13 00:02:23.351450 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 00:02:23.351624 systemd[1]: Stopped kubelet.service. Aug 13 00:02:23.353164 systemd[1]: Starting kubelet.service... Aug 13 00:02:23.440060 sshd[1884]: Accepted publickey for core from 10.200.16.10 port 34946 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:02:23.665137 sshd[1884]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:02:23.669647 systemd[1]: Started session-3.scope. Aug 13 00:02:23.669977 systemd-logind[1562]: New session 3 of user core. Aug 13 00:02:23.819305 systemd[1]: Started kubelet.service. Aug 13 00:02:23.865898 kubelet[1896]: E0813 00:02:23.865839 1896 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:02:23.867575 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:02:23.867712 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Aug 13 00:02:24.022996 systemd[1]: Started sshd@1-10.200.20.38:22-10.200.16.10:34962.service. Aug 13 00:02:24.509739 sshd[1904]: Accepted publickey for core from 10.200.16.10 port 34962 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:02:24.511384 sshd[1904]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:02:24.515566 systemd[1]: Started session-4.scope. Aug 13 00:02:24.516638 systemd-logind[1562]: New session 4 of user core. Aug 13 00:02:24.858538 sshd[1904]: pam_unix(sshd:session): session closed for user core Aug 13 00:02:24.861343 systemd-logind[1562]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:02:24.862228 systemd[1]: sshd@1-10.200.20.38:22-10.200.16.10:34962.service: Deactivated successfully. Aug 13 00:02:24.863040 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:02:24.863652 systemd-logind[1562]: Removed session 4. Aug 13 00:02:24.938772 systemd[1]: Started sshd@2-10.200.20.38:22-10.200.16.10:34974.service. Aug 13 00:02:25.426586 sshd[1911]: Accepted publickey for core from 10.200.16.10 port 34974 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:02:25.427937 sshd[1911]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:02:25.432386 systemd[1]: Started session-5.scope. Aug 13 00:02:25.432565 systemd-logind[1562]: New session 5 of user core. Aug 13 00:02:25.772028 sshd[1911]: pam_unix(sshd:session): session closed for user core Aug 13 00:02:25.774934 systemd[1]: sshd@2-10.200.20.38:22-10.200.16.10:34974.service: Deactivated successfully. Aug 13 00:02:25.775676 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:02:25.776610 systemd-logind[1562]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:02:25.777441 systemd-logind[1562]: Removed session 5. Aug 13 00:02:25.848556 systemd[1]: Started sshd@3-10.200.20.38:22-10.200.16.10:34988.service. 
Aug 13 00:02:26.321335 sshd[1918]: Accepted publickey for core from 10.200.16.10 port 34988 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:02:26.323023 sshd[1918]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:02:26.327347 systemd[1]: Started session-6.scope. Aug 13 00:02:26.327992 systemd-logind[1562]: New session 6 of user core. Aug 13 00:02:26.663565 sshd[1918]: pam_unix(sshd:session): session closed for user core Aug 13 00:02:26.666746 systemd[1]: sshd@3-10.200.20.38:22-10.200.16.10:34988.service: Deactivated successfully. Aug 13 00:02:26.667507 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:02:26.668759 systemd-logind[1562]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:02:26.669538 systemd-logind[1562]: Removed session 6. Aug 13 00:02:26.742609 systemd[1]: Started sshd@4-10.200.20.38:22-10.200.16.10:34998.service. Aug 13 00:02:27.230592 sshd[1925]: Accepted publickey for core from 10.200.16.10 port 34998 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:02:27.232211 sshd[1925]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:02:27.236430 systemd[1]: Started session-7.scope. Aug 13 00:02:27.237537 systemd-logind[1562]: New session 7 of user core. Aug 13 00:02:27.357415 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Aug 13 00:02:27.891222 sudo[1929]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:02:27.891455 sudo[1929]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 00:02:27.926643 systemd[1]: Starting docker.service... 
Aug 13 00:02:27.982483 env[1939]: time="2025-08-13T00:02:27.982438177Z" level=info msg="Starting up" Aug 13 00:02:27.986380 env[1939]: time="2025-08-13T00:02:27.986338355Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 00:02:27.986380 env[1939]: time="2025-08-13T00:02:27.986367635Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 00:02:27.986531 env[1939]: time="2025-08-13T00:02:27.986389595Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 00:02:27.986531 env[1939]: time="2025-08-13T00:02:27.986401275Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 00:02:27.988333 env[1939]: time="2025-08-13T00:02:27.988035042Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 00:02:27.988425 env[1939]: time="2025-08-13T00:02:27.988340124Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 00:02:27.988425 env[1939]: time="2025-08-13T00:02:27.988361084Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 00:02:27.988425 env[1939]: time="2025-08-13T00:02:27.988370564Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 00:02:27.994027 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2246698001-merged.mount: Deactivated successfully. Aug 13 00:02:28.128061 env[1939]: time="2025-08-13T00:02:28.128018316Z" level=warning msg="Your kernel does not support cgroup blkio weight" Aug 13 00:02:28.128061 env[1939]: time="2025-08-13T00:02:28.128049556Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Aug 13 00:02:28.128265 env[1939]: time="2025-08-13T00:02:28.128204397Z" level=info msg="Loading containers: start." 
Aug 13 00:02:28.367916 kernel: Initializing XFRM netlink socket Aug 13 00:02:28.401409 env[1939]: time="2025-08-13T00:02:28.401373669Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Aug 13 00:02:28.587269 systemd-networkd[1771]: docker0: Link UP Aug 13 00:02:28.616125 env[1939]: time="2025-08-13T00:02:28.616084094Z" level=info msg="Loading containers: done." Aug 13 00:02:28.637311 env[1939]: time="2025-08-13T00:02:28.636975582Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:02:28.637642 env[1939]: time="2025-08-13T00:02:28.637622785Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Aug 13 00:02:28.637812 env[1939]: time="2025-08-13T00:02:28.637797506Z" level=info msg="Daemon has completed initialization" Aug 13 00:02:28.667965 systemd[1]: Started docker.service. Aug 13 00:02:28.672621 env[1939]: time="2025-08-13T00:02:28.672576732Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:02:32.395668 env[1581]: time="2025-08-13T00:02:32.395622498Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 00:02:33.279048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4081083286.mount: Deactivated successfully. Aug 13 00:02:34.101468 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Aug 13 00:02:34.101638 systemd[1]: Stopped kubelet.service. Aug 13 00:02:34.103166 systemd[1]: Starting kubelet.service... Aug 13 00:02:34.212857 systemd[1]: Started kubelet.service. 
Aug 13 00:02:34.264109 kubelet[2060]: E0813 00:02:34.264068 2060 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:02:34.265902 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:02:34.266045 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:02:34.622905 env[1581]: time="2025-08-13T00:02:34.622833144Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:34.627976 env[1581]: time="2025-08-13T00:02:34.627925879Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:34.631860 env[1581]: time="2025-08-13T00:02:34.631819690Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:34.635942 env[1581]: time="2025-08-13T00:02:34.635898502Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:34.636716 env[1581]: time="2025-08-13T00:02:34.636686504Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\"" Aug 13 00:02:34.638609 env[1581]: time="2025-08-13T00:02:34.638583509Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 00:02:35.984157 env[1581]: time="2025-08-13T00:02:35.984102267Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:35.991716 env[1581]: time="2025-08-13T00:02:35.991670687Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:35.995565 env[1581]: time="2025-08-13T00:02:35.995523057Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:35.999249 env[1581]: time="2025-08-13T00:02:35.999198827Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:36.000028 env[1581]: time="2025-08-13T00:02:35.999999669Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\"" Aug 13 00:02:36.000647 env[1581]: time="2025-08-13T00:02:36.000621831Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 00:02:36.500913 update_engine[1564]: I0813 00:02:36.500828 1564 update_attempter.cc:509] Updating boot flags... 
Aug 13 00:02:37.226298 env[1581]: time="2025-08-13T00:02:37.226254359Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:37.233764 env[1581]: time="2025-08-13T00:02:37.233719377Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:37.238018 env[1581]: time="2025-08-13T00:02:37.237982867Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:37.242829 env[1581]: time="2025-08-13T00:02:37.242789358Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:37.243657 env[1581]: time="2025-08-13T00:02:37.243627800Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\"" Aug 13 00:02:37.244976 env[1581]: time="2025-08-13T00:02:37.244951563Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 00:02:38.334545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1184966593.mount: Deactivated successfully. 
Aug 13 00:02:38.810163 env[1581]: time="2025-08-13T00:02:38.810107416Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:38.814803 env[1581]: time="2025-08-13T00:02:38.814765786Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:38.820148 env[1581]: time="2025-08-13T00:02:38.820099318Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:38.824008 env[1581]: time="2025-08-13T00:02:38.823961727Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:38.824274 env[1581]: time="2025-08-13T00:02:38.824245407Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\"" Aug 13 00:02:38.824782 env[1581]: time="2025-08-13T00:02:38.824755608Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:02:39.541001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount773535110.mount: Deactivated successfully. 
Aug 13 00:02:40.980267 env[1581]: time="2025-08-13T00:02:40.980206575Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:40.985812 env[1581]: time="2025-08-13T00:02:40.985617145Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:40.991182 env[1581]: time="2025-08-13T00:02:40.991135796Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:40.994982 env[1581]: time="2025-08-13T00:02:40.994935283Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:40.996207 env[1581]: time="2025-08-13T00:02:40.995656725Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Aug 13 00:02:40.996933 env[1581]: time="2025-08-13T00:02:40.996900687Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:02:41.577902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2941958330.mount: Deactivated successfully. 
Aug 13 00:02:41.605693 env[1581]: time="2025-08-13T00:02:41.605640637Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:41.612018 env[1581]: time="2025-08-13T00:02:41.611969048Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:41.615230 env[1581]: time="2025-08-13T00:02:41.615183774Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:41.618743 env[1581]: time="2025-08-13T00:02:41.618708981Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:41.619206 env[1581]: time="2025-08-13T00:02:41.619174461Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Aug 13 00:02:41.620082 env[1581]: time="2025-08-13T00:02:41.620038583Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 00:02:42.303544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4079153442.mount: Deactivated successfully. Aug 13 00:02:44.351419 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Aug 13 00:02:44.351589 systemd[1]: Stopped kubelet.service. Aug 13 00:02:44.353095 systemd[1]: Starting kubelet.service... Aug 13 00:02:44.452372 systemd[1]: Started kubelet.service. 
Aug 13 00:02:44.502219 kubelet[2167]: E0813 00:02:44.502172 2167 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:02:44.503586 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:02:44.503733 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:02:46.225495 env[1581]: time="2025-08-13T00:02:46.225449512Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:46.231745 env[1581]: time="2025-08-13T00:02:46.231705640Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:46.236153 env[1581]: time="2025-08-13T00:02:46.236105286Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:46.241471 env[1581]: time="2025-08-13T00:02:46.241434213Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:46.242212 env[1581]: time="2025-08-13T00:02:46.242181854Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Aug 13 00:02:50.965191 systemd[1]: Stopped kubelet.service. Aug 13 00:02:50.967424 systemd[1]: Starting kubelet.service... 
Aug 13 00:02:51.003440 systemd[1]: Reloading. Aug 13 00:02:51.072972 /usr/lib/systemd/system-generators/torcx-generator[2225]: time="2025-08-13T00:02:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:02:51.073360 /usr/lib/systemd/system-generators/torcx-generator[2225]: time="2025-08-13T00:02:51Z" level=info msg="torcx already run" Aug 13 00:02:51.164420 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:02:51.164441 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:02:51.180745 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:02:51.396458 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 00:02:51.396559 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 00:02:51.397386 systemd[1]: Stopped kubelet.service. Aug 13 00:02:51.400380 systemd[1]: Starting kubelet.service... Aug 13 00:02:51.498593 systemd[1]: Started kubelet.service. Aug 13 00:02:51.549629 kubelet[2295]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:02:51.549629 kubelet[2295]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Aug 13 00:02:51.549629 kubelet[2295]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:02:51.550062 kubelet[2295]: I0813 00:02:51.549680 2295 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:02:52.582318 kubelet[2295]: I0813 00:02:52.582275 2295 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:02:52.582318 kubelet[2295]: I0813 00:02:52.582309 2295 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:02:52.582693 kubelet[2295]: I0813 00:02:52.582560 2295 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:02:52.613179 kubelet[2295]: E0813 00:02:52.613134 2295 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:02:52.615212 kubelet[2295]: I0813 00:02:52.615176 2295 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:02:52.622362 kubelet[2295]: E0813 00:02:52.622292 2295 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:02:52.622547 kubelet[2295]: I0813 00:02:52.622534 2295 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been 
enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:02:52.626815 kubelet[2295]: I0813 00:02:52.626783 2295 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:02:52.628034 kubelet[2295]: I0813 00:02:52.628010 2295 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:02:52.628344 kubelet[2295]: I0813 00:02:52.628315 2295 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:02:52.628590 kubelet[2295]: I0813 00:02:52.628409 2295 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-a-af9fafecff","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod"
:10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 00:02:52.628728 kubelet[2295]: I0813 00:02:52.628713 2295 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:02:52.628795 kubelet[2295]: I0813 00:02:52.628786 2295 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:02:52.629015 kubelet[2295]: I0813 00:02:52.629002 2295 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:02:52.634835 kubelet[2295]: I0813 00:02:52.634804 2295 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:02:52.635048 kubelet[2295]: I0813 00:02:52.635035 2295 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:02:52.635148 kubelet[2295]: I0813 00:02:52.635138 2295 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:02:52.635206 kubelet[2295]: I0813 00:02:52.635198 2295 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:02:52.642224 kubelet[2295]: W0813 00:02:52.641984 2295 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-a-af9fafecff&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Aug 13 00:02:52.642224 kubelet[2295]: E0813 00:02:52.642060 2295 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-a-af9fafecff&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:02:52.642517 kubelet[2295]: W0813 00:02:52.642473 2295 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Aug 13 00:02:52.642570 kubelet[2295]: E0813 00:02:52.642518 2295 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:02:52.642625 kubelet[2295]: I0813 00:02:52.642603 2295 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 00:02:52.643088 kubelet[2295]: I0813 00:02:52.643070 2295 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:02:52.643150 kubelet[2295]: W0813 00:02:52.643119 2295 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:02:52.643841 kubelet[2295]: I0813 00:02:52.643819 2295 server.go:1274] "Started kubelet" Aug 13 00:02:52.661063 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Aug 13 00:02:52.661247 kubelet[2295]: I0813 00:02:52.661219 2295 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:02:52.665584 kubelet[2295]: E0813 00:02:52.664519 2295 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.38:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.38:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-a-af9fafecff.185b2aa617d64681 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-a-af9fafecff,UID:ci-3510.3.8-a-af9fafecff,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-a-af9fafecff,},FirstTimestamp:2025-08-13 00:02:52.643796609 +0000 UTC m=+1.137047464,LastTimestamp:2025-08-13 00:02:52.643796609 +0000 UTC m=+1.137047464,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-a-af9fafecff,}" Aug 13 00:02:52.667212 kubelet[2295]: E0813 00:02:52.667190 2295 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:02:52.667917 kubelet[2295]: I0813 00:02:52.667868 2295 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:02:52.668231 kubelet[2295]: E0813 00:02:52.668200 2295 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-af9fafecff\" not found" Aug 13 00:02:52.668779 kubelet[2295]: I0813 00:02:52.668749 2295 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:02:52.669064 kubelet[2295]: I0813 00:02:52.669047 2295 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:02:52.669859 kubelet[2295]: E0813 00:02:52.669649 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-af9fafecff?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="200ms" Aug 13 00:02:52.670233 kubelet[2295]: I0813 00:02:52.670203 2295 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:02:52.670337 kubelet[2295]: I0813 00:02:52.670308 2295 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:02:52.670698 kubelet[2295]: W0813 00:02:52.670658 2295 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Aug 13 00:02:52.670756 kubelet[2295]: E0813 00:02:52.670709 2295 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:02:52.671120 kubelet[2295]: I0813 00:02:52.671081 2295 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:02:52.673146 kubelet[2295]: I0813 00:02:52.672993 2295 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:02:52.673313 kubelet[2295]: I0813 00:02:52.673283 2295 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:02:52.673387 kubelet[2295]: I0813 00:02:52.673371 2295 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:02:52.674543 kubelet[2295]: I0813 00:02:52.674375 2295 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:02:52.675285 kubelet[2295]: I0813 00:02:52.673616 2295 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:02:52.739953 kubelet[2295]: I0813 00:02:52.739921 2295 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:02:52.739953 kubelet[2295]: I0813 00:02:52.739941 2295 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:02:52.739953 kubelet[2295]: I0813 00:02:52.739960 2295 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:02:52.745778 kubelet[2295]: I0813 00:02:52.745745 2295 policy_none.go:49] "None policy: Start" Aug 13 00:02:52.746517 kubelet[2295]: I0813 00:02:52.746488 2295 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:02:52.746517 kubelet[2295]: I0813 00:02:52.746522 2295 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:02:52.755141 kubelet[2295]: I0813 00:02:52.755105 2295 manager.go:513] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:02:52.755280 kubelet[2295]: I0813 00:02:52.755262 2295 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:02:52.755320 kubelet[2295]: I0813 00:02:52.755277 2295 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:02:52.758147 kubelet[2295]: I0813 00:02:52.758116 2295 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:02:52.761955 kubelet[2295]: E0813 00:02:52.761925 2295 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-a-af9fafecff\" not found" Aug 13 00:02:52.777645 kubelet[2295]: I0813 00:02:52.777589 2295 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:02:52.778699 kubelet[2295]: I0813 00:02:52.778660 2295 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:02:52.778699 kubelet[2295]: I0813 00:02:52.778696 2295 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:02:52.778830 kubelet[2295]: I0813 00:02:52.778716 2295 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:02:52.778830 kubelet[2295]: E0813 00:02:52.778755 2295 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Aug 13 00:02:52.781614 kubelet[2295]: W0813 00:02:52.781543 2295 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Aug 13 00:02:52.781803 kubelet[2295]: E0813 00:02:52.781781 2295 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:02:52.857587 kubelet[2295]: I0813 00:02:52.857561 2295 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-a-af9fafecff" Aug 13 00:02:52.858131 kubelet[2295]: E0813 00:02:52.858106 2295 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-3510.3.8-a-af9fafecff" Aug 13 00:02:52.870610 kubelet[2295]: E0813 00:02:52.870575 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-af9fafecff?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="400ms" Aug 13 00:02:53.060641 kubelet[2295]: I0813 00:02:53.060617 2295 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-a-af9fafecff" Aug 13 00:02:53.061175 kubelet[2295]: E0813 00:02:53.061152 2295 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-3510.3.8-a-af9fafecff" Aug 13 00:02:53.071472 kubelet[2295]: I0813 00:02:53.071449 2295 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/35b286d0d80660a85c1620c1f87a84cb-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-af9fafecff\" (UID: \"35b286d0d80660a85c1620c1f87a84cb\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-af9fafecff" Aug 13 00:02:53.071614 kubelet[2295]: I0813 00:02:53.071601 2295 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/35b286d0d80660a85c1620c1f87a84cb-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-af9fafecff\" (UID: \"35b286d0d80660a85c1620c1f87a84cb\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-af9fafecff" Aug 13 00:02:53.071699 kubelet[2295]: I0813 00:02:53.071687 2295 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/35b286d0d80660a85c1620c1f87a84cb-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-a-af9fafecff\" (UID: \"35b286d0d80660a85c1620c1f87a84cb\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-af9fafecff" Aug 13 00:02:53.071792 kubelet[2295]: I0813 00:02:53.071778 2295 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/35b286d0d80660a85c1620c1f87a84cb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-a-af9fafecff\" (UID: \"35b286d0d80660a85c1620c1f87a84cb\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-af9fafecff" Aug 13 00:02:53.071916 kubelet[2295]: I0813 00:02:53.071861 2295 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4159a08b53cb8b501acb31466a2d8cba-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-a-af9fafecff\" (UID: \"4159a08b53cb8b501acb31466a2d8cba\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-af9fafecff" Aug 13 00:02:53.072016 kubelet[2295]: I0813 00:02:53.072003 2295 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/35b286d0d80660a85c1620c1f87a84cb-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-a-af9fafecff\" (UID: \"35b286d0d80660a85c1620c1f87a84cb\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.8-a-af9fafecff" Aug 13 00:02:53.072119 kubelet[2295]: I0813 00:02:53.072094 2295 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4df0f1f4127430bd5c4a089f69a4e069-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-a-af9fafecff\" (UID: \"4df0f1f4127430bd5c4a089f69a4e069\") " pod="kube-system/kube-scheduler-ci-3510.3.8-a-af9fafecff" Aug 13 00:02:53.072167 kubelet[2295]: I0813 00:02:53.072132 2295 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4159a08b53cb8b501acb31466a2d8cba-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-a-af9fafecff\" (UID: \"4159a08b53cb8b501acb31466a2d8cba\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-af9fafecff" Aug 13 00:02:53.072167 kubelet[2295]: I0813 00:02:53.072154 2295 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4159a08b53cb8b501acb31466a2d8cba-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-a-af9fafecff\" (UID: \"4159a08b53cb8b501acb31466a2d8cba\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-af9fafecff" Aug 13 00:02:53.189150 env[1581]: time="2025-08-13T00:02:53.188562868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-a-af9fafecff,Uid:4159a08b53cb8b501acb31466a2d8cba,Namespace:kube-system,Attempt:0,}" Aug 13 00:02:53.189150 env[1581]: time="2025-08-13T00:02:53.188975812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-a-af9fafecff,Uid:35b286d0d80660a85c1620c1f87a84cb,Namespace:kube-system,Attempt:0,}" Aug 13 00:02:53.190755 env[1581]: time="2025-08-13T00:02:53.190696467Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-a-af9fafecff,Uid:4df0f1f4127430bd5c4a089f69a4e069,Namespace:kube-system,Attempt:0,}" Aug 13 00:02:53.271671 kubelet[2295]: E0813 00:02:53.271608 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-af9fafecff?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="800ms" Aug 13 00:02:53.464271 kubelet[2295]: I0813 00:02:53.463833 2295 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-a-af9fafecff" Aug 13 00:02:53.464271 kubelet[2295]: E0813 00:02:53.464174 2295 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-3510.3.8-a-af9fafecff" Aug 13 00:02:53.619975 kubelet[2295]: W0813 00:02:53.619862 2295 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Aug 13 00:02:53.619975 kubelet[2295]: E0813 00:02:53.619944 2295 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:02:53.655323 kubelet[2295]: W0813 00:02:53.655239 2295 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Aug 13 00:02:53.655323 kubelet[2295]: E0813 
00:02:53.655292 2295 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:02:53.822633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1171543239.mount: Deactivated successfully. Aug 13 00:02:53.838724 env[1581]: time="2025-08-13T00:02:53.838673703Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:53.848304 env[1581]: time="2025-08-13T00:02:53.848259502Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:53.855789 env[1581]: time="2025-08-13T00:02:53.855748539Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:53.858218 env[1581]: time="2025-08-13T00:02:53.858183168Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:53.861996 env[1581]: time="2025-08-13T00:02:53.861957106Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:53.867072 env[1581]: time="2025-08-13T00:02:53.867011315Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:53.871045 env[1581]: 
time="2025-08-13T00:02:53.871008085Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:53.873797 env[1581]: time="2025-08-13T00:02:53.873751381Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:53.878592 env[1581]: time="2025-08-13T00:02:53.878546441Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:53.886627 env[1581]: time="2025-08-13T00:02:53.886574378Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:53.888326 env[1581]: time="2025-08-13T00:02:53.888289034Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:53.891786 env[1581]: time="2025-08-13T00:02:53.891740264Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:02:53.914623 env[1581]: time="2025-08-13T00:02:53.914241936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:02:53.914623 env[1581]: time="2025-08-13T00:02:53.914280975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:02:53.914623 env[1581]: time="2025-08-13T00:02:53.914290975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:02:53.914623 env[1581]: time="2025-08-13T00:02:53.914514566Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e53ac796a034195de7dd85330c796c049ae26fb343481a4343fa01d1cfbab358 pid=2334 runtime=io.containerd.runc.v2 Aug 13 00:02:53.920522 kubelet[2295]: W0813 00:02:53.920392 2295 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-a-af9fafecff&limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Aug 13 00:02:53.920522 kubelet[2295]: E0813 00:02:53.920457 2295 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-a-af9fafecff&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:02:53.952393 env[1581]: time="2025-08-13T00:02:53.949334415Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:02:53.952393 env[1581]: time="2025-08-13T00:02:53.949426451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:02:53.952393 env[1581]: time="2025-08-13T00:02:53.949452130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:02:53.952393 env[1581]: time="2025-08-13T00:02:53.949600565Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/da0e3e82708ba048f3558c1f42abc6163aa690e924e95b578af4c08a7338f6af pid=2373 runtime=io.containerd.runc.v2 Aug 13 00:02:53.980078 env[1581]: time="2025-08-13T00:02:53.978839624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:02:53.980078 env[1581]: time="2025-08-13T00:02:53.978894541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:02:53.980078 env[1581]: time="2025-08-13T00:02:53.978905621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:02:53.980078 env[1581]: time="2025-08-13T00:02:53.979180691Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f21aabc7f867de4ddc22a53d6b8d828a3de891e8115887578979868124352ad pid=2414 runtime=io.containerd.runc.v2 Aug 13 00:02:53.981531 env[1581]: time="2025-08-13T00:02:53.981488164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-a-af9fafecff,Uid:4159a08b53cb8b501acb31466a2d8cba,Namespace:kube-system,Attempt:0,} returns sandbox id \"e53ac796a034195de7dd85330c796c049ae26fb343481a4343fa01d1cfbab358\"" Aug 13 00:02:53.985229 env[1581]: time="2025-08-13T00:02:53.985191984Z" level=info msg="CreateContainer within sandbox \"e53ac796a034195de7dd85330c796c049ae26fb343481a4343fa01d1cfbab358\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:02:54.004864 env[1581]: time="2025-08-13T00:02:54.004814088Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-a-af9fafecff,Uid:4df0f1f4127430bd5c4a089f69a4e069,Namespace:kube-system,Attempt:0,} returns sandbox id \"da0e3e82708ba048f3558c1f42abc6163aa690e924e95b578af4c08a7338f6af\"" Aug 13 00:02:54.007373 env[1581]: time="2025-08-13T00:02:54.007336516Z" level=info msg="CreateContainer within sandbox \"da0e3e82708ba048f3558c1f42abc6163aa690e924e95b578af4c08a7338f6af\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:02:54.043106 env[1581]: time="2025-08-13T00:02:54.043064727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-a-af9fafecff,Uid:35b286d0d80660a85c1620c1f87a84cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f21aabc7f867de4ddc22a53d6b8d828a3de891e8115887578979868124352ad\"" Aug 13 00:02:54.046015 env[1581]: time="2025-08-13T00:02:54.045970661Z" level=info msg="CreateContainer within sandbox \"0f21aabc7f867de4ddc22a53d6b8d828a3de891e8115887578979868124352ad\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:02:54.072522 kubelet[2295]: E0813 00:02:54.072472 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-a-af9fafecff?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="1.6s" Aug 13 00:02:54.088979 kubelet[2295]: W0813 00:02:54.088141 2295 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.38:6443: connect: connection refused Aug 13 00:02:54.088979 kubelet[2295]: E0813 00:02:54.088207 2295 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:02:54.146139 env[1581]: time="2025-08-13T00:02:54.146063274Z" level=info msg="CreateContainer within sandbox \"e53ac796a034195de7dd85330c796c049ae26fb343481a4343fa01d1cfbab358\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4270f471edcaf1f5533a8b6cc8f856f3f4984109f020409d15f85f9f4ebc0aab\"" Aug 13 00:02:54.147038 env[1581]: time="2025-08-13T00:02:54.147009199Z" level=info msg="StartContainer for \"4270f471edcaf1f5533a8b6cc8f856f3f4984109f020409d15f85f9f4ebc0aab\"" Aug 13 00:02:54.172318 env[1581]: time="2025-08-13T00:02:54.172257914Z" level=info msg="CreateContainer within sandbox \"da0e3e82708ba048f3558c1f42abc6163aa690e924e95b578af4c08a7338f6af\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a9f52ef6440671287a605ef32e21e1c89abccea1c69b898c04b76625bf0ecd04\"" Aug 13 00:02:54.173078 env[1581]: time="2025-08-13T00:02:54.173031846Z" level=info msg="StartContainer for \"a9f52ef6440671287a605ef32e21e1c89abccea1c69b898c04b76625bf0ecd04\"" Aug 13 00:02:54.181334 env[1581]: time="2025-08-13T00:02:54.181277224Z" level=info msg="CreateContainer within sandbox \"0f21aabc7f867de4ddc22a53d6b8d828a3de891e8115887578979868124352ad\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"442353550a11afcb80e4e0c4a5b4ee52af6b807333aa0c1b18ca2a188795c623\"" Aug 13 00:02:54.182236 env[1581]: time="2025-08-13T00:02:54.182197230Z" level=info msg="StartContainer for \"442353550a11afcb80e4e0c4a5b4ee52af6b807333aa0c1b18ca2a188795c623\"" Aug 13 00:02:54.232991 env[1581]: time="2025-08-13T00:02:54.232082763Z" level=info msg="StartContainer for \"4270f471edcaf1f5533a8b6cc8f856f3f4984109f020409d15f85f9f4ebc0aab\" returns successfully" Aug 13 00:02:54.266510 kubelet[2295]: I0813 00:02:54.266479 2295 
kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-a-af9fafecff" Aug 13 00:02:54.266897 kubelet[2295]: E0813 00:02:54.266840 2295 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-3510.3.8-a-af9fafecff" Aug 13 00:02:54.267108 env[1581]: time="2025-08-13T00:02:54.267064401Z" level=info msg="StartContainer for \"a9f52ef6440671287a605ef32e21e1c89abccea1c69b898c04b76625bf0ecd04\" returns successfully" Aug 13 00:02:54.311069 env[1581]: time="2025-08-13T00:02:54.310993912Z" level=info msg="StartContainer for \"442353550a11afcb80e4e0c4a5b4ee52af6b807333aa0c1b18ca2a188795c623\" returns successfully" Aug 13 00:02:55.868626 kubelet[2295]: I0813 00:02:55.868597 2295 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-a-af9fafecff" Aug 13 00:02:57.184654 kubelet[2295]: E0813 00:02:57.184607 2295 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-a-af9fafecff\" not found" node="ci-3510.3.8-a-af9fafecff" Aug 13 00:02:57.365437 kubelet[2295]: I0813 00:02:57.365393 2295 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-a-af9fafecff" Aug 13 00:02:57.645661 kubelet[2295]: I0813 00:02:57.645623 2295 apiserver.go:52] "Watching apiserver" Aug 13 00:02:57.669928 kubelet[2295]: I0813 00:02:57.669891 2295 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:02:59.552787 systemd[1]: Reloading. 
Aug 13 00:02:59.646140 /usr/lib/systemd/system-generators/torcx-generator[2590]: time="2025-08-13T00:02:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:02:59.646562 /usr/lib/systemd/system-generators/torcx-generator[2590]: time="2025-08-13T00:02:59Z" level=info msg="torcx already run" Aug 13 00:02:59.777512 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:02:59.777534 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:02:59.793497 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:02:59.892061 systemd[1]: Stopping kubelet.service... Aug 13 00:02:59.911513 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:02:59.911823 systemd[1]: Stopped kubelet.service. Aug 13 00:02:59.914349 systemd[1]: Starting kubelet.service... Aug 13 00:03:00.083937 systemd[1]: Started kubelet.service. Aug 13 00:03:00.149097 kubelet[2662]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:03:00.149430 kubelet[2662]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Aug 13 00:03:00.149483 kubelet[2662]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:03:00.149640 kubelet[2662]: I0813 00:03:00.149609 2662 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:03:00.164532 kubelet[2662]: I0813 00:03:00.164494 2662 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:03:00.164532 kubelet[2662]: I0813 00:03:00.164524 2662 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:03:00.165633 kubelet[2662]: I0813 00:03:00.165598 2662 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:03:00.168983 kubelet[2662]: I0813 00:03:00.168370 2662 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 00:03:00.171680 kubelet[2662]: I0813 00:03:00.171642 2662 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:03:00.178340 kubelet[2662]: E0813 00:03:00.178296 2662 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:03:00.178340 kubelet[2662]: I0813 00:03:00.178337 2662 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:03:00.180976 kubelet[2662]: I0813 00:03:00.180954 2662 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:03:00.181338 kubelet[2662]: I0813 00:03:00.181319 2662 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:03:00.181449 kubelet[2662]: I0813 00:03:00.181417 2662 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:03:00.181628 kubelet[2662]: I0813 00:03:00.181447 2662 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-a-af9fafecff","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolo
gyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 00:03:00.181706 kubelet[2662]: I0813 00:03:00.181633 2662 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:03:00.181706 kubelet[2662]: I0813 00:03:00.181642 2662 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:03:00.181706 kubelet[2662]: I0813 00:03:00.181674 2662 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:03:00.181791 kubelet[2662]: I0813 00:03:00.181764 2662 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:03:00.181791 kubelet[2662]: I0813 00:03:00.181776 2662 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:03:00.181838 kubelet[2662]: I0813 00:03:00.181794 2662 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:03:00.181838 kubelet[2662]: I0813 00:03:00.181808 2662 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:03:00.182988 kubelet[2662]: I0813 00:03:00.182958 2662 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 00:03:00.183466 kubelet[2662]: I0813 00:03:00.183436 2662 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:03:00.183925 kubelet[2662]: I0813 00:03:00.183907 2662 server.go:1274] "Started kubelet" Aug 13 00:03:00.203635 kubelet[2662]: I0813 00:03:00.203468 2662 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:03:00.216420 kubelet[2662]: I0813 00:03:00.216393 2662 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:03:00.216701 kubelet[2662]: I0813 00:03:00.216671 2662 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:03:00.218428 kubelet[2662]: I0813 00:03:00.218398 2662 server.go:449] "Adding debug handlers to kubelet server" Aug 13 
00:03:00.219612 kubelet[2662]: I0813 00:03:00.219325 2662 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:03:00.219902 kubelet[2662]: E0813 00:03:00.219845 2662 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.8-a-af9fafecff\" not found" Aug 13 00:03:00.221726 kubelet[2662]: I0813 00:03:00.221689 2662 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:03:00.221855 kubelet[2662]: I0813 00:03:00.221838 2662 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:03:00.222124 kubelet[2662]: I0813 00:03:00.222078 2662 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:03:00.222529 kubelet[2662]: I0813 00:03:00.222511 2662 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:03:00.229325 kubelet[2662]: E0813 00:03:00.229281 2662 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:03:00.235221 kubelet[2662]: I0813 00:03:00.232490 2662 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:03:00.235221 kubelet[2662]: I0813 00:03:00.232505 2662 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:03:00.235221 kubelet[2662]: I0813 00:03:00.232585 2662 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:03:00.244334 kubelet[2662]: I0813 00:03:00.244300 2662 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:03:00.245266 kubelet[2662]: I0813 00:03:00.245250 2662 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:03:00.245362 kubelet[2662]: I0813 00:03:00.245352 2662 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:03:00.245435 kubelet[2662]: I0813 00:03:00.245426 2662 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:03:00.245537 kubelet[2662]: E0813 00:03:00.245520 2662 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:03:00.300475 kubelet[2662]: I0813 00:03:00.300443 2662 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:03:00.300475 kubelet[2662]: I0813 00:03:00.300465 2662 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:03:00.300475 kubelet[2662]: I0813 00:03:00.300486 2662 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:03:00.300662 kubelet[2662]: I0813 00:03:00.300639 2662 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:03:00.300662 kubelet[2662]: I0813 00:03:00.300649 2662 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:03:00.300728 kubelet[2662]: I0813 00:03:00.300669 2662 policy_none.go:49] "None policy: Start" Aug 13 00:03:00.301534 kubelet[2662]: I0813 00:03:00.301511 2662 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:03:00.301599 kubelet[2662]: I0813 00:03:00.301540 2662 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:03:00.301709 kubelet[2662]: I0813 00:03:00.301691 2662 state_mem.go:75] "Updated machine memory state" Aug 13 00:03:00.302927 kubelet[2662]: I0813 00:03:00.302905 2662 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:03:00.303096 kubelet[2662]: I0813 00:03:00.303073 2662 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:03:00.303143 kubelet[2662]: I0813 00:03:00.303089 2662 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:03:00.304908 kubelet[2662]: I0813 00:03:00.304053 2662 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:03:00.359254 kubelet[2662]: W0813 00:03:00.359219 2662 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:03:00.360551 kubelet[2662]: W0813 00:03:00.360531 2662 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:03:00.360845 kubelet[2662]: W0813 00:03:00.360579 2662 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:03:00.406252 kubelet[2662]: I0813 00:03:00.406146 2662 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.8-a-af9fafecff" Aug 13 00:03:00.422688 kubelet[2662]: I0813 00:03:00.422646 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4159a08b53cb8b501acb31466a2d8cba-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-a-af9fafecff\" (UID: \"4159a08b53cb8b501acb31466a2d8cba\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-af9fafecff" Aug 13 00:03:00.422688 kubelet[2662]: I0813 00:03:00.422685 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4df0f1f4127430bd5c4a089f69a4e069-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-a-af9fafecff\" (UID: \"4df0f1f4127430bd5c4a089f69a4e069\") " pod="kube-system/kube-scheduler-ci-3510.3.8-a-af9fafecff" Aug 13 00:03:00.422861 kubelet[2662]: I0813 00:03:00.422707 2662 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/35b286d0d80660a85c1620c1f87a84cb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-a-af9fafecff\" (UID: \"35b286d0d80660a85c1620c1f87a84cb\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-af9fafecff" Aug 13 00:03:00.422861 kubelet[2662]: I0813 00:03:00.422723 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4159a08b53cb8b501acb31466a2d8cba-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-a-af9fafecff\" (UID: \"4159a08b53cb8b501acb31466a2d8cba\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-af9fafecff" Aug 13 00:03:00.422861 kubelet[2662]: I0813 00:03:00.422738 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4159a08b53cb8b501acb31466a2d8cba-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-a-af9fafecff\" (UID: \"4159a08b53cb8b501acb31466a2d8cba\") " pod="kube-system/kube-apiserver-ci-3510.3.8-a-af9fafecff" Aug 13 00:03:00.422861 kubelet[2662]: I0813 00:03:00.422753 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/35b286d0d80660a85c1620c1f87a84cb-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-af9fafecff\" (UID: \"35b286d0d80660a85c1620c1f87a84cb\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-af9fafecff" Aug 13 00:03:00.422861 kubelet[2662]: I0813 00:03:00.422770 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/35b286d0d80660a85c1620c1f87a84cb-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-a-af9fafecff\" (UID: \"35b286d0d80660a85c1620c1f87a84cb\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.8-a-af9fafecff" Aug 13 00:03:00.423006 kubelet[2662]: I0813 00:03:00.422784 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/35b286d0d80660a85c1620c1f87a84cb-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-a-af9fafecff\" (UID: \"35b286d0d80660a85c1620c1f87a84cb\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-af9fafecff" Aug 13 00:03:00.423006 kubelet[2662]: I0813 00:03:00.422801 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/35b286d0d80660a85c1620c1f87a84cb-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-a-af9fafecff\" (UID: \"35b286d0d80660a85c1620c1f87a84cb\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-a-af9fafecff" Aug 13 00:03:00.423351 kubelet[2662]: I0813 00:03:00.423328 2662 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.8-a-af9fafecff" Aug 13 00:03:00.423485 kubelet[2662]: I0813 00:03:00.423476 2662 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.8-a-af9fafecff" Aug 13 00:03:00.613845 sudo[2693]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 00:03:00.614442 sudo[2693]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Aug 13 00:03:01.138474 sudo[2693]: pam_unix(sudo:session): session closed for user root Aug 13 00:03:01.182419 kubelet[2662]: I0813 00:03:01.182378 2662 apiserver.go:52] "Watching apiserver" Aug 13 00:03:01.222003 kubelet[2662]: I0813 00:03:01.221968 2662 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:03:01.307138 kubelet[2662]: I0813 00:03:01.307078 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-3510.3.8-a-af9fafecff" podStartSLOduration=1.30706301 podStartE2EDuration="1.30706301s" podCreationTimestamp="2025-08-13 00:03:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:03:01.306859816 +0000 UTC m=+1.210887407" watchObservedRunningTime="2025-08-13 00:03:01.30706301 +0000 UTC m=+1.211090561" Aug 13 00:03:01.307330 kubelet[2662]: I0813 00:03:01.307198 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-a-af9fafecff" podStartSLOduration=1.307193526 podStartE2EDuration="1.307193526s" podCreationTimestamp="2025-08-13 00:03:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:03:01.289762253 +0000 UTC m=+1.193789844" watchObservedRunningTime="2025-08-13 00:03:01.307193526 +0000 UTC m=+1.211221117" Aug 13 00:03:01.318176 kubelet[2662]: I0813 00:03:01.318114 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-a-af9fafecff" podStartSLOduration=1.318100196 podStartE2EDuration="1.318100196s" podCreationTimestamp="2025-08-13 00:03:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:03:01.31760633 +0000 UTC m=+1.221633921" watchObservedRunningTime="2025-08-13 00:03:01.318100196 +0000 UTC m=+1.222127787" Aug 13 00:03:02.958019 sudo[1929]: pam_unix(sudo:session): session closed for user root Aug 13 00:03:03.032506 sshd[1925]: pam_unix(sshd:session): session closed for user core Aug 13 00:03:03.035801 systemd-logind[1562]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:03:03.036146 systemd[1]: sshd@4-10.200.20.38:22-10.200.16.10:34998.service: Deactivated successfully. 
Aug 13 00:03:03.037021 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:03:03.038305 systemd-logind[1562]: Removed session 7. Aug 13 00:03:04.968344 kubelet[2662]: I0813 00:03:04.968314 2662 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:03:04.969276 env[1581]: time="2025-08-13T00:03:04.969231149Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:03:04.970042 kubelet[2662]: I0813 00:03:04.970024 2662 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:03:06.055493 kubelet[2662]: I0813 00:03:06.055270 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-cilium-run\") pod \"cilium-4s9cw\" (UID: \"f7607543-368e-4809-997d-75a32727f91e\") " pod="kube-system/cilium-4s9cw" Aug 13 00:03:06.055910 kubelet[2662]: I0813 00:03:06.055869 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-cilium-cgroup\") pod \"cilium-4s9cw\" (UID: \"f7607543-368e-4809-997d-75a32727f91e\") " pod="kube-system/cilium-4s9cw" Aug 13 00:03:06.055991 kubelet[2662]: I0813 00:03:06.055979 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-cni-path\") pod \"cilium-4s9cw\" (UID: \"f7607543-368e-4809-997d-75a32727f91e\") " pod="kube-system/cilium-4s9cw" Aug 13 00:03:06.056064 kubelet[2662]: I0813 00:03:06.056052 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-lib-modules\") pod \"cilium-4s9cw\" (UID: \"f7607543-368e-4809-997d-75a32727f91e\") " pod="kube-system/cilium-4s9cw" Aug 13 00:03:06.056139 kubelet[2662]: I0813 00:03:06.056128 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f7607543-368e-4809-997d-75a32727f91e-hubble-tls\") pod \"cilium-4s9cw\" (UID: \"f7607543-368e-4809-997d-75a32727f91e\") " pod="kube-system/cilium-4s9cw" Aug 13 00:03:06.056211 kubelet[2662]: I0813 00:03:06.056199 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c9ad301-0bf1-43b1-8346-28cebc3725f8-xtables-lock\") pod \"kube-proxy-8ftch\" (UID: \"1c9ad301-0bf1-43b1-8346-28cebc3725f8\") " pod="kube-system/kube-proxy-8ftch" Aug 13 00:03:06.056283 kubelet[2662]: I0813 00:03:06.056269 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-host-proc-sys-net\") pod \"cilium-4s9cw\" (UID: \"f7607543-368e-4809-997d-75a32727f91e\") " pod="kube-system/cilium-4s9cw" Aug 13 00:03:06.056348 kubelet[2662]: I0813 00:03:06.056337 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-hostproc\") pod \"cilium-4s9cw\" (UID: \"f7607543-368e-4809-997d-75a32727f91e\") " pod="kube-system/cilium-4s9cw" Aug 13 00:03:06.056416 kubelet[2662]: I0813 00:03:06.056404 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-host-proc-sys-kernel\") pod \"cilium-4s9cw\" (UID: 
\"f7607543-368e-4809-997d-75a32727f91e\") " pod="kube-system/cilium-4s9cw" Aug 13 00:03:06.056489 kubelet[2662]: I0813 00:03:06.056476 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7607543-368e-4809-997d-75a32727f91e-cilium-config-path\") pod \"cilium-4s9cw\" (UID: \"f7607543-368e-4809-997d-75a32727f91e\") " pod="kube-system/cilium-4s9cw" Aug 13 00:03:06.056557 kubelet[2662]: I0813 00:03:06.056546 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-bpf-maps\") pod \"cilium-4s9cw\" (UID: \"f7607543-368e-4809-997d-75a32727f91e\") " pod="kube-system/cilium-4s9cw" Aug 13 00:03:06.056627 kubelet[2662]: I0813 00:03:06.056615 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c9ad301-0bf1-43b1-8346-28cebc3725f8-lib-modules\") pod \"kube-proxy-8ftch\" (UID: \"1c9ad301-0bf1-43b1-8346-28cebc3725f8\") " pod="kube-system/kube-proxy-8ftch" Aug 13 00:03:06.056694 kubelet[2662]: I0813 00:03:06.056682 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-etc-cni-netd\") pod \"cilium-4s9cw\" (UID: \"f7607543-368e-4809-997d-75a32727f91e\") " pod="kube-system/cilium-4s9cw" Aug 13 00:03:06.056758 kubelet[2662]: I0813 00:03:06.056747 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-xtables-lock\") pod \"cilium-4s9cw\" (UID: \"f7607543-368e-4809-997d-75a32727f91e\") " pod="kube-system/cilium-4s9cw" Aug 13 00:03:06.056832 kubelet[2662]: I0813 00:03:06.056820 
2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1c9ad301-0bf1-43b1-8346-28cebc3725f8-kube-proxy\") pod \"kube-proxy-8ftch\" (UID: \"1c9ad301-0bf1-43b1-8346-28cebc3725f8\") " pod="kube-system/kube-proxy-8ftch" Aug 13 00:03:06.056927 kubelet[2662]: I0813 00:03:06.056909 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmtnr\" (UniqueName: \"kubernetes.io/projected/1c9ad301-0bf1-43b1-8346-28cebc3725f8-kube-api-access-nmtnr\") pod \"kube-proxy-8ftch\" (UID: \"1c9ad301-0bf1-43b1-8346-28cebc3725f8\") " pod="kube-system/kube-proxy-8ftch" Aug 13 00:03:06.056998 kubelet[2662]: I0813 00:03:06.056987 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f7607543-368e-4809-997d-75a32727f91e-clustermesh-secrets\") pod \"cilium-4s9cw\" (UID: \"f7607543-368e-4809-997d-75a32727f91e\") " pod="kube-system/cilium-4s9cw" Aug 13 00:03:06.057070 kubelet[2662]: I0813 00:03:06.057059 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5sln\" (UniqueName: \"kubernetes.io/projected/f7607543-368e-4809-997d-75a32727f91e-kube-api-access-w5sln\") pod \"cilium-4s9cw\" (UID: \"f7607543-368e-4809-997d-75a32727f91e\") " pod="kube-system/cilium-4s9cw" Aug 13 00:03:06.157769 kubelet[2662]: I0813 00:03:06.157725 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fgs5\" (UniqueName: \"kubernetes.io/projected/724fbe26-40f5-4a64-8cdb-b7ada888e4cf-kube-api-access-7fgs5\") pod \"cilium-operator-5d85765b45-94b9j\" (UID: \"724fbe26-40f5-4a64-8cdb-b7ada888e4cf\") " pod="kube-system/cilium-operator-5d85765b45-94b9j" Aug 13 00:03:06.157958 kubelet[2662]: I0813 00:03:06.157902 2662 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/724fbe26-40f5-4a64-8cdb-b7ada888e4cf-cilium-config-path\") pod \"cilium-operator-5d85765b45-94b9j\" (UID: \"724fbe26-40f5-4a64-8cdb-b7ada888e4cf\") " pod="kube-system/cilium-operator-5d85765b45-94b9j" Aug 13 00:03:06.158729 kubelet[2662]: I0813 00:03:06.158697 2662 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Aug 13 00:03:06.265557 env[1581]: time="2025-08-13T00:03:06.265505626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4s9cw,Uid:f7607543-368e-4809-997d-75a32727f91e,Namespace:kube-system,Attempt:0,}" Aug 13 00:03:06.273371 env[1581]: time="2025-08-13T00:03:06.273323979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8ftch,Uid:1c9ad301-0bf1-43b1-8346-28cebc3725f8,Namespace:kube-system,Attempt:0,}" Aug 13 00:03:06.314189 env[1581]: time="2025-08-13T00:03:06.312127830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:06.314189 env[1581]: time="2025-08-13T00:03:06.312184709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:06.314189 env[1581]: time="2025-08-13T00:03:06.312196508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:06.314189 env[1581]: time="2025-08-13T00:03:06.312464021Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/66d143a9e7d48f0dc45c3a7ff878714c37e004c98254e2d1fa0e474b0b97383c pid=2761 runtime=io.containerd.runc.v2 Aug 13 00:03:06.314990 env[1581]: time="2025-08-13T00:03:06.306364703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:06.314990 env[1581]: time="2025-08-13T00:03:06.306401702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:06.314990 env[1581]: time="2025-08-13T00:03:06.306411582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:06.314990 env[1581]: time="2025-08-13T00:03:06.306577417Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd4c7dcb94ea4b60d3e1a1885be8be5ac1379c28fed610c23ece2d3a10da5e38 pid=2743 runtime=io.containerd.runc.v2 Aug 13 00:03:06.374595 env[1581]: time="2025-08-13T00:03:06.374548216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4s9cw,Uid:f7607543-368e-4809-997d-75a32727f91e,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd4c7dcb94ea4b60d3e1a1885be8be5ac1379c28fed610c23ece2d3a10da5e38\"" Aug 13 00:03:06.377481 env[1581]: time="2025-08-13T00:03:06.376604761Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 00:03:06.379019 env[1581]: time="2025-08-13T00:03:06.378969098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8ftch,Uid:1c9ad301-0bf1-43b1-8346-28cebc3725f8,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"66d143a9e7d48f0dc45c3a7ff878714c37e004c98254e2d1fa0e474b0b97383c\"" Aug 13 00:03:06.381591 env[1581]: time="2025-08-13T00:03:06.381549710Z" level=info msg="CreateContainer within sandbox \"66d143a9e7d48f0dc45c3a7ff878714c37e004c98254e2d1fa0e474b0b97383c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:03:06.417614 env[1581]: time="2025-08-13T00:03:06.417561235Z" level=info msg="CreateContainer within sandbox \"66d143a9e7d48f0dc45c3a7ff878714c37e004c98254e2d1fa0e474b0b97383c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a707417d77c09861839375f5b427db75360a4d0f824c3a4edcd7c8355677bbfe\"" Aug 13 00:03:06.419000 env[1581]: time="2025-08-13T00:03:06.418394373Z" level=info msg="StartContainer for \"a707417d77c09861839375f5b427db75360a4d0f824c3a4edcd7c8355677bbfe\"" Aug 13 00:03:06.426615 env[1581]: time="2025-08-13T00:03:06.426566437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-94b9j,Uid:724fbe26-40f5-4a64-8cdb-b7ada888e4cf,Namespace:kube-system,Attempt:0,}" Aug 13 00:03:06.464390 env[1581]: time="2025-08-13T00:03:06.464314756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:06.464505 env[1581]: time="2025-08-13T00:03:06.464397874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:06.464505 env[1581]: time="2025-08-13T00:03:06.464429593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:06.464607 env[1581]: time="2025-08-13T00:03:06.464572389Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/02ddcb7a190243947389ad26948f292405dc4244ab9054d40b23d154a87115c2 pid=2852 runtime=io.containerd.runc.v2 Aug 13 00:03:06.484143 env[1581]: time="2025-08-13T00:03:06.484092672Z" level=info msg="StartContainer for \"a707417d77c09861839375f5b427db75360a4d0f824c3a4edcd7c8355677bbfe\" returns successfully" Aug 13 00:03:06.532677 env[1581]: time="2025-08-13T00:03:06.532627705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-94b9j,Uid:724fbe26-40f5-4a64-8cdb-b7ada888e4cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"02ddcb7a190243947389ad26948f292405dc4244ab9054d40b23d154a87115c2\"" Aug 13 00:03:07.303211 kubelet[2662]: I0813 00:03:07.303152 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8ftch" podStartSLOduration=2.303133952 podStartE2EDuration="2.303133952s" podCreationTimestamp="2025-08-13 00:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:03:07.302695883 +0000 UTC m=+7.206723434" watchObservedRunningTime="2025-08-13 00:03:07.303133952 +0000 UTC m=+7.207161543" Aug 13 00:03:12.584430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3725043003.mount: Deactivated successfully. 
Aug 13 00:03:15.011563 env[1581]: time="2025-08-13T00:03:15.011510507Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:15.016569 env[1581]: time="2025-08-13T00:03:15.016529481Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:15.020494 env[1581]: time="2025-08-13T00:03:15.020444878Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:15.021173 env[1581]: time="2025-08-13T00:03:15.021140064Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Aug 13 00:03:15.024304 env[1581]: time="2025-08-13T00:03:15.023421496Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 00:03:15.024808 env[1581]: time="2025-08-13T00:03:15.024772547Z" level=info msg="CreateContainer within sandbox \"dd4c7dcb94ea4b60d3e1a1885be8be5ac1379c28fed610c23ece2d3a10da5e38\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:03:15.062903 env[1581]: time="2025-08-13T00:03:15.062847104Z" level=info msg="CreateContainer within sandbox \"dd4c7dcb94ea4b60d3e1a1885be8be5ac1379c28fed610c23ece2d3a10da5e38\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b03a11575b7e67e7934a539839601f98ab8ee9dbabd0ef797e7d05a7c9cb7dde\"" Aug 13 
00:03:15.069940 env[1581]: time="2025-08-13T00:03:15.069904196Z" level=info msg="StartContainer for \"b03a11575b7e67e7934a539839601f98ab8ee9dbabd0ef797e7d05a7c9cb7dde\"" Aug 13 00:03:15.127638 env[1581]: time="2025-08-13T00:03:15.127495621Z" level=info msg="StartContainer for \"b03a11575b7e67e7934a539839601f98ab8ee9dbabd0ef797e7d05a7c9cb7dde\" returns successfully" Aug 13 00:03:16.044963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b03a11575b7e67e7934a539839601f98ab8ee9dbabd0ef797e7d05a7c9cb7dde-rootfs.mount: Deactivated successfully. Aug 13 00:03:16.922847 env[1581]: time="2025-08-13T00:03:16.922796442Z" level=info msg="shim disconnected" id=b03a11575b7e67e7934a539839601f98ab8ee9dbabd0ef797e7d05a7c9cb7dde Aug 13 00:03:16.923415 env[1581]: time="2025-08-13T00:03:16.923390669Z" level=warning msg="cleaning up after shim disconnected" id=b03a11575b7e67e7934a539839601f98ab8ee9dbabd0ef797e7d05a7c9cb7dde namespace=k8s.io Aug 13 00:03:16.923502 env[1581]: time="2025-08-13T00:03:16.923489587Z" level=info msg="cleaning up dead shim" Aug 13 00:03:16.931630 env[1581]: time="2025-08-13T00:03:16.931588261Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:03:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3075 runtime=io.containerd.runc.v2\n" Aug 13 00:03:17.317129 env[1581]: time="2025-08-13T00:03:17.316286582Z" level=info msg="CreateContainer within sandbox \"dd4c7dcb94ea4b60d3e1a1885be8be5ac1379c28fed610c23ece2d3a10da5e38\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:03:17.359375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3276026045.mount: Deactivated successfully. Aug 13 00:03:17.366521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3845911701.mount: Deactivated successfully. 
Aug 13 00:03:17.376002 env[1581]: time="2025-08-13T00:03:17.375939705Z" level=info msg="CreateContainer within sandbox \"dd4c7dcb94ea4b60d3e1a1885be8be5ac1379c28fed610c23ece2d3a10da5e38\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7ef903890ccf524e6af921b149872b4be3376ceb1a6f70293a4918a3c51b9146\"" Aug 13 00:03:17.378185 env[1581]: time="2025-08-13T00:03:17.378109861Z" level=info msg="StartContainer for \"7ef903890ccf524e6af921b149872b4be3376ceb1a6f70293a4918a3c51b9146\"" Aug 13 00:03:17.443003 env[1581]: time="2025-08-13T00:03:17.442440610Z" level=info msg="StartContainer for \"7ef903890ccf524e6af921b149872b4be3376ceb1a6f70293a4918a3c51b9146\" returns successfully" Aug 13 00:03:17.443213 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:03:17.443455 systemd[1]: Stopped systemd-sysctl.service. Aug 13 00:03:17.445278 systemd[1]: Stopping systemd-sysctl.service... Aug 13 00:03:17.447457 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:03:17.458655 systemd[1]: Finished systemd-sysctl.service. 
Aug 13 00:03:17.484062 env[1581]: time="2025-08-13T00:03:17.484007855Z" level=info msg="shim disconnected" id=7ef903890ccf524e6af921b149872b4be3376ceb1a6f70293a4918a3c51b9146 Aug 13 00:03:17.484062 env[1581]: time="2025-08-13T00:03:17.484057854Z" level=warning msg="cleaning up after shim disconnected" id=7ef903890ccf524e6af921b149872b4be3376ceb1a6f70293a4918a3c51b9146 namespace=k8s.io Aug 13 00:03:17.484062 env[1581]: time="2025-08-13T00:03:17.484068254Z" level=info msg="cleaning up dead shim" Aug 13 00:03:17.490763 env[1581]: time="2025-08-13T00:03:17.490713921Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:03:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3139 runtime=io.containerd.runc.v2\n" Aug 13 00:03:18.335197 env[1581]: time="2025-08-13T00:03:18.335147050Z" level=info msg="CreateContainer within sandbox \"dd4c7dcb94ea4b60d3e1a1885be8be5ac1379c28fed610c23ece2d3a10da5e38\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:03:18.358071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ef903890ccf524e6af921b149872b4be3376ceb1a6f70293a4918a3c51b9146-rootfs.mount: Deactivated successfully. Aug 13 00:03:18.429689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2378439339.mount: Deactivated successfully. Aug 13 00:03:18.439729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2203815337.mount: Deactivated successfully. 
Aug 13 00:03:18.454664 env[1581]: time="2025-08-13T00:03:18.454613149Z" level=info msg="CreateContainer within sandbox \"dd4c7dcb94ea4b60d3e1a1885be8be5ac1379c28fed610c23ece2d3a10da5e38\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8b4ca441f8441235fb98bbfdd7280ed1885e33d66d01571a682c2bd4aef42db9\"" Aug 13 00:03:18.456701 env[1581]: time="2025-08-13T00:03:18.455356134Z" level=info msg="StartContainer for \"8b4ca441f8441235fb98bbfdd7280ed1885e33d66d01571a682c2bd4aef42db9\"" Aug 13 00:03:18.529446 env[1581]: time="2025-08-13T00:03:18.529402123Z" level=info msg="StartContainer for \"8b4ca441f8441235fb98bbfdd7280ed1885e33d66d01571a682c2bd4aef42db9\" returns successfully" Aug 13 00:03:18.813001 env[1581]: time="2025-08-13T00:03:18.812945728Z" level=info msg="shim disconnected" id=8b4ca441f8441235fb98bbfdd7280ed1885e33d66d01571a682c2bd4aef42db9 Aug 13 00:03:18.813001 env[1581]: time="2025-08-13T00:03:18.812997007Z" level=warning msg="cleaning up after shim disconnected" id=8b4ca441f8441235fb98bbfdd7280ed1885e33d66d01571a682c2bd4aef42db9 namespace=k8s.io Aug 13 00:03:18.813001 env[1581]: time="2025-08-13T00:03:18.813010286Z" level=info msg="cleaning up dead shim" Aug 13 00:03:18.820448 env[1581]: time="2025-08-13T00:03:18.820392102Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:03:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3197 runtime=io.containerd.runc.v2\n" Aug 13 00:03:18.860773 env[1581]: time="2025-08-13T00:03:18.860722392Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:18.868482 env[1581]: time="2025-08-13T00:03:18.868413721Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Aug 13 00:03:18.874533 env[1581]: time="2025-08-13T00:03:18.874489082Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:03:18.875168 env[1581]: time="2025-08-13T00:03:18.875132829Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Aug 13 00:03:18.879231 env[1581]: time="2025-08-13T00:03:18.879168910Z" level=info msg="CreateContainer within sandbox \"02ddcb7a190243947389ad26948f292405dc4244ab9054d40b23d154a87115c2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 00:03:18.906349 env[1581]: time="2025-08-13T00:03:18.906298939Z" level=info msg="CreateContainer within sandbox \"02ddcb7a190243947389ad26948f292405dc4244ab9054d40b23d154a87115c2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9d428362348a1400f27cc93761018bb80e9ee3b2effeb4286ece13c1731918ed\"" Aug 13 00:03:18.908664 env[1581]: time="2025-08-13T00:03:18.908003505Z" level=info msg="StartContainer for \"9d428362348a1400f27cc93761018bb80e9ee3b2effeb4286ece13c1731918ed\"" Aug 13 00:03:18.960056 env[1581]: time="2025-08-13T00:03:18.960000686Z" level=info msg="StartContainer for \"9d428362348a1400f27cc93761018bb80e9ee3b2effeb4286ece13c1731918ed\" returns successfully" Aug 13 00:03:19.321510 env[1581]: time="2025-08-13T00:03:19.321451673Z" level=info msg="CreateContainer within sandbox \"dd4c7dcb94ea4b60d3e1a1885be8be5ac1379c28fed610c23ece2d3a10da5e38\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:03:19.357160 env[1581]: time="2025-08-13T00:03:19.355381145Z" level=info msg="CreateContainer within 
sandbox \"dd4c7dcb94ea4b60d3e1a1885be8be5ac1379c28fed610c23ece2d3a10da5e38\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d3550e64cbd03f6059b6c961715f3010f0e57293ae1582610110477d72c2a8a3\"" Aug 13 00:03:19.357942 env[1581]: time="2025-08-13T00:03:19.357912936Z" level=info msg="StartContainer for \"d3550e64cbd03f6059b6c961715f3010f0e57293ae1582610110477d72c2a8a3\"" Aug 13 00:03:19.395052 kubelet[2662]: I0813 00:03:19.394625 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-94b9j" podStartSLOduration=1.052431332 podStartE2EDuration="13.394604554s" podCreationTimestamp="2025-08-13 00:03:06 +0000 UTC" firstStartedPulling="2025-08-13 00:03:06.534256342 +0000 UTC m=+6.438283933" lastFinishedPulling="2025-08-13 00:03:18.876429564 +0000 UTC m=+18.780457155" observedRunningTime="2025-08-13 00:03:19.348934708 +0000 UTC m=+19.252962299" watchObservedRunningTime="2025-08-13 00:03:19.394604554 +0000 UTC m=+19.298632145" Aug 13 00:03:19.488618 env[1581]: time="2025-08-13T00:03:19.488566157Z" level=info msg="StartContainer for \"d3550e64cbd03f6059b6c961715f3010f0e57293ae1582610110477d72c2a8a3\" returns successfully" Aug 13 00:03:19.520819 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3550e64cbd03f6059b6c961715f3010f0e57293ae1582610110477d72c2a8a3-rootfs.mount: Deactivated successfully. 
Aug 13 00:03:19.543891 env[1581]: time="2025-08-13T00:03:19.543825300Z" level=info msg="shim disconnected" id=d3550e64cbd03f6059b6c961715f3010f0e57293ae1582610110477d72c2a8a3 Aug 13 00:03:19.543891 env[1581]: time="2025-08-13T00:03:19.543868899Z" level=warning msg="cleaning up after shim disconnected" id=d3550e64cbd03f6059b6c961715f3010f0e57293ae1582610110477d72c2a8a3 namespace=k8s.io Aug 13 00:03:19.543891 env[1581]: time="2025-08-13T00:03:19.543892979Z" level=info msg="cleaning up dead shim" Aug 13 00:03:19.556890 env[1581]: time="2025-08-13T00:03:19.556827651Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:03:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3287 runtime=io.containerd.runc.v2\n" Aug 13 00:03:20.326817 env[1581]: time="2025-08-13T00:03:20.326695192Z" level=info msg="CreateContainer within sandbox \"dd4c7dcb94ea4b60d3e1a1885be8be5ac1379c28fed610c23ece2d3a10da5e38\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:03:20.372771 env[1581]: time="2025-08-13T00:03:20.372717932Z" level=info msg="CreateContainer within sandbox \"dd4c7dcb94ea4b60d3e1a1885be8be5ac1379c28fed610c23ece2d3a10da5e38\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b02c2d127a1fd08e6dea3777ca93d3529ccf8e3a9edd59e62a35739c8b911c4c\"" Aug 13 00:03:20.373370 env[1581]: time="2025-08-13T00:03:20.373338761Z" level=info msg="StartContainer for \"b02c2d127a1fd08e6dea3777ca93d3529ccf8e3a9edd59e62a35739c8b911c4c\"" Aug 13 00:03:20.444122 env[1581]: time="2025-08-13T00:03:20.444068480Z" level=info msg="StartContainer for \"b02c2d127a1fd08e6dea3777ca93d3529ccf8e3a9edd59e62a35739c8b911c4c\" returns successfully" Aug 13 00:03:20.533900 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Aug 13 00:03:20.602854 kubelet[2662]: I0813 00:03:20.602821 2662 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 00:03:20.759367 kubelet[2662]: I0813 00:03:20.759314 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xjvj\" (UniqueName: \"kubernetes.io/projected/43c3cef9-a1f3-48eb-b653-959a499cf628-kube-api-access-2xjvj\") pod \"coredns-7c65d6cfc9-7ngt6\" (UID: \"43c3cef9-a1f3-48eb-b653-959a499cf628\") " pod="kube-system/coredns-7c65d6cfc9-7ngt6" Aug 13 00:03:20.759367 kubelet[2662]: I0813 00:03:20.759373 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbvc4\" (UniqueName: \"kubernetes.io/projected/d627fc55-a83c-4ae4-81cb-86ea6dd784ec-kube-api-access-pbvc4\") pod \"coredns-7c65d6cfc9-vmclk\" (UID: \"d627fc55-a83c-4ae4-81cb-86ea6dd784ec\") " pod="kube-system/coredns-7c65d6cfc9-vmclk" Aug 13 00:03:20.759540 kubelet[2662]: I0813 00:03:20.759395 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d627fc55-a83c-4ae4-81cb-86ea6dd784ec-config-volume\") pod \"coredns-7c65d6cfc9-vmclk\" (UID: \"d627fc55-a83c-4ae4-81cb-86ea6dd784ec\") " pod="kube-system/coredns-7c65d6cfc9-vmclk" Aug 13 00:03:20.759540 kubelet[2662]: I0813 00:03:20.759414 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43c3cef9-a1f3-48eb-b653-959a499cf628-config-volume\") pod \"coredns-7c65d6cfc9-7ngt6\" (UID: \"43c3cef9-a1f3-48eb-b653-959a499cf628\") " pod="kube-system/coredns-7c65d6cfc9-7ngt6" Aug 13 00:03:20.961732 env[1581]: time="2025-08-13T00:03:20.961619614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vmclk,Uid:d627fc55-a83c-4ae4-81cb-86ea6dd784ec,Namespace:kube-system,Attempt:0,}" 
Aug 13 00:03:20.973913 env[1581]: time="2025-08-13T00:03:20.973842505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7ngt6,Uid:43c3cef9-a1f3-48eb-b653-959a499cf628,Namespace:kube-system,Attempt:0,}" Aug 13 00:03:21.188904 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Aug 13 00:03:21.364856 systemd[1]: run-containerd-runc-k8s.io-b02c2d127a1fd08e6dea3777ca93d3529ccf8e3a9edd59e62a35739c8b911c4c-runc.68rIE8.mount: Deactivated successfully. Aug 13 00:03:22.842774 systemd-networkd[1771]: cilium_host: Link UP Aug 13 00:03:22.843482 systemd-networkd[1771]: cilium_net: Link UP Aug 13 00:03:22.843489 systemd-networkd[1771]: cilium_net: Gained carrier Aug 13 00:03:22.850436 systemd-networkd[1771]: cilium_host: Gained carrier Aug 13 00:03:22.850901 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Aug 13 00:03:23.047684 systemd-networkd[1771]: cilium_vxlan: Link UP Aug 13 00:03:23.047691 systemd-networkd[1771]: cilium_vxlan: Gained carrier Aug 13 00:03:23.318912 kernel: NET: Registered PF_ALG protocol family Aug 13 00:03:23.332103 systemd-networkd[1771]: cilium_net: Gained IPv6LL Aug 13 00:03:23.499075 systemd-networkd[1771]: cilium_host: Gained IPv6LL Aug 13 00:03:24.197083 systemd-networkd[1771]: lxc_health: Link UP Aug 13 00:03:24.216092 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Aug 13 00:03:24.216638 systemd-networkd[1771]: lxc_health: Gained carrier Aug 13 00:03:24.291310 kubelet[2662]: I0813 00:03:24.291246 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4s9cw" podStartSLOduration=10.64493174 podStartE2EDuration="19.291228603s" podCreationTimestamp="2025-08-13 00:03:05 +0000 UTC" firstStartedPulling="2025-08-13 00:03:06.376189892 +0000 UTC m=+6.280217483" lastFinishedPulling="2025-08-13 00:03:15.022486755 +0000 UTC m=+14.926514346" observedRunningTime="2025-08-13 00:03:21.346821211 +0000 UTC 
m=+21.250848802" watchObservedRunningTime="2025-08-13 00:03:24.291228603 +0000 UTC m=+24.195256154" Aug 13 00:03:24.543302 systemd-networkd[1771]: lxc76ccbf6b2307: Link UP Aug 13 00:03:24.562178 kernel: eth0: renamed from tmp633a7 Aug 13 00:03:24.562395 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc76ccbf6b2307: link becomes ready Aug 13 00:03:24.562572 systemd-networkd[1771]: lxc76ccbf6b2307: Gained carrier Aug 13 00:03:24.585741 systemd-networkd[1771]: lxcfbe0dd6e01e0: Link UP Aug 13 00:03:24.595911 kernel: eth0: renamed from tmpdb5fd Aug 13 00:03:24.614594 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcfbe0dd6e01e0: link becomes ready Aug 13 00:03:24.614088 systemd-networkd[1771]: lxcfbe0dd6e01e0: Gained carrier Aug 13 00:03:24.715101 systemd-networkd[1771]: cilium_vxlan: Gained IPv6LL Aug 13 00:03:25.355092 systemd-networkd[1771]: lxc_health: Gained IPv6LL Aug 13 00:03:25.803072 systemd-networkd[1771]: lxcfbe0dd6e01e0: Gained IPv6LL Aug 13 00:03:26.059120 systemd-networkd[1771]: lxc76ccbf6b2307: Gained IPv6LL Aug 13 00:03:28.253707 env[1581]: time="2025-08-13T00:03:28.253642129Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:28.254126 env[1581]: time="2025-08-13T00:03:28.254097602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:28.254213 env[1581]: time="2025-08-13T00:03:28.254193480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:28.254430 env[1581]: time="2025-08-13T00:03:28.254402277Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/db5fd8f7bd8b9f23d3f383ba26e1278e716deb50f5099325e28dcb3bf4a5c538 pid=3828 runtime=io.containerd.runc.v2 Aug 13 00:03:28.300080 env[1581]: time="2025-08-13T00:03:28.291672578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:28.300080 env[1581]: time="2025-08-13T00:03:28.291714537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:28.300080 env[1581]: time="2025-08-13T00:03:28.291725177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:28.300080 env[1581]: time="2025-08-13T00:03:28.291917494Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/633a7a5d3c9d41fa3e90308a3d44274504dbe6ffd98783b59b7756023c22b57a pid=3856 runtime=io.containerd.runc.v2 Aug 13 00:03:28.371099 env[1581]: time="2025-08-13T00:03:28.370956426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vmclk,Uid:d627fc55-a83c-4ae4-81cb-86ea6dd784ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"db5fd8f7bd8b9f23d3f383ba26e1278e716deb50f5099325e28dcb3bf4a5c538\"" Aug 13 00:03:28.375362 env[1581]: time="2025-08-13T00:03:28.375315478Z" level=info msg="CreateContainer within sandbox \"db5fd8f7bd8b9f23d3f383ba26e1278e716deb50f5099325e28dcb3bf4a5c538\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:03:28.394257 env[1581]: time="2025-08-13T00:03:28.394217265Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-7ngt6,Uid:43c3cef9-a1f3-48eb-b653-959a499cf628,Namespace:kube-system,Attempt:0,} returns sandbox id \"633a7a5d3c9d41fa3e90308a3d44274504dbe6ffd98783b59b7756023c22b57a\"" Aug 13 00:03:28.403194 env[1581]: time="2025-08-13T00:03:28.403152566Z" level=info msg="CreateContainer within sandbox \"633a7a5d3c9d41fa3e90308a3d44274504dbe6ffd98783b59b7756023c22b57a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:03:28.408361 env[1581]: time="2025-08-13T00:03:28.408323765Z" level=info msg="CreateContainer within sandbox \"db5fd8f7bd8b9f23d3f383ba26e1278e716deb50f5099325e28dcb3bf4a5c538\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"43e32d8bff36de6b938a7759c315f8ed36edf8b3057e9ae931ec82e0ad43437d\"" Aug 13 00:03:28.413958 env[1581]: time="2025-08-13T00:03:28.409015195Z" level=info msg="StartContainer for \"43e32d8bff36de6b938a7759c315f8ed36edf8b3057e9ae931ec82e0ad43437d\"" Aug 13 00:03:28.462141 env[1581]: time="2025-08-13T00:03:28.462082250Z" level=info msg="CreateContainer within sandbox \"633a7a5d3c9d41fa3e90308a3d44274504dbe6ffd98783b59b7756023c22b57a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0ace92e6a58954781a553e35d587390d6b4691066c4d8e948fa5345665e20b7c\"" Aug 13 00:03:28.466758 env[1581]: time="2025-08-13T00:03:28.465462878Z" level=info msg="StartContainer for \"0ace92e6a58954781a553e35d587390d6b4691066c4d8e948fa5345665e20b7c\"" Aug 13 00:03:28.467479 env[1581]: time="2025-08-13T00:03:28.467428527Z" level=info msg="StartContainer for \"43e32d8bff36de6b938a7759c315f8ed36edf8b3057e9ae931ec82e0ad43437d\" returns successfully" Aug 13 00:03:28.548224 env[1581]: time="2025-08-13T00:03:28.548087994Z" level=info msg="StartContainer for \"0ace92e6a58954781a553e35d587390d6b4691066c4d8e948fa5345665e20b7c\" returns successfully" Aug 13 00:03:29.379778 kubelet[2662]: I0813 00:03:29.379702 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-7c65d6cfc9-7ngt6" podStartSLOduration=23.379681441 podStartE2EDuration="23.379681441s" podCreationTimestamp="2025-08-13 00:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:03:29.36593109 +0000 UTC m=+29.269958681" watchObservedRunningTime="2025-08-13 00:03:29.379681441 +0000 UTC m=+29.283709032" Aug 13 00:03:29.380208 kubelet[2662]: I0813 00:03:29.380145 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-vmclk" podStartSLOduration=23.380139274 podStartE2EDuration="23.380139274s" podCreationTimestamp="2025-08-13 00:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:03:29.377523434 +0000 UTC m=+29.281551025" watchObservedRunningTime="2025-08-13 00:03:29.380139274 +0000 UTC m=+29.284166865" Aug 13 00:04:35.577119 systemd[1]: Started sshd@5-10.200.20.38:22-10.200.16.10:51456.service. Aug 13 00:04:36.064253 sshd[3991]: Accepted publickey for core from 10.200.16.10 port 51456 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:04:36.065963 sshd[3991]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:04:36.069773 systemd-logind[1562]: New session 8 of user core. Aug 13 00:04:36.070219 systemd[1]: Started session-8.scope. Aug 13 00:04:36.524624 sshd[3991]: pam_unix(sshd:session): session closed for user core Aug 13 00:04:36.527339 systemd[1]: sshd@5-10.200.20.38:22-10.200.16.10:51456.service: Deactivated successfully. Aug 13 00:04:36.528320 systemd-logind[1562]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:04:36.528361 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:04:36.529499 systemd-logind[1562]: Removed session 8. 
Aug 13 00:04:41.604514 systemd[1]: Started sshd@6-10.200.20.38:22-10.200.16.10:56110.service. Aug 13 00:04:42.091590 sshd[4006]: Accepted publickey for core from 10.200.16.10 port 56110 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:04:42.092677 sshd[4006]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:04:42.097035 systemd[1]: Started session-9.scope. Aug 13 00:04:42.097241 systemd-logind[1562]: New session 9 of user core. Aug 13 00:04:42.497509 sshd[4006]: pam_unix(sshd:session): session closed for user core Aug 13 00:04:42.500254 systemd-logind[1562]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:04:42.500341 systemd[1]: sshd@6-10.200.20.38:22-10.200.16.10:56110.service: Deactivated successfully. Aug 13 00:04:42.501362 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:04:42.501792 systemd-logind[1562]: Removed session 9. Aug 13 00:04:47.573417 systemd[1]: Started sshd@7-10.200.20.38:22-10.200.16.10:56124.service. Aug 13 00:04:48.045534 sshd[4019]: Accepted publickey for core from 10.200.16.10 port 56124 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:04:48.047212 sshd[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:04:48.051501 systemd[1]: Started session-10.scope. Aug 13 00:04:48.051904 systemd-logind[1562]: New session 10 of user core. Aug 13 00:04:48.446088 sshd[4019]: pam_unix(sshd:session): session closed for user core Aug 13 00:04:48.448721 systemd[1]: sshd@7-10.200.20.38:22-10.200.16.10:56124.service: Deactivated successfully. Aug 13 00:04:48.448887 systemd-logind[1562]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:04:48.449485 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:04:48.450200 systemd-logind[1562]: Removed session 10. Aug 13 00:04:53.525316 systemd[1]: Started sshd@8-10.200.20.38:22-10.200.16.10:52832.service. 
Aug 13 00:04:54.012533 sshd[4032]: Accepted publickey for core from 10.200.16.10 port 52832 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:04:54.013821 sshd[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:04:54.018259 systemd[1]: Started session-11.scope. Aug 13 00:04:54.019275 systemd-logind[1562]: New session 11 of user core. Aug 13 00:04:54.421844 sshd[4032]: pam_unix(sshd:session): session closed for user core Aug 13 00:04:54.424546 systemd[1]: sshd@8-10.200.20.38:22-10.200.16.10:52832.service: Deactivated successfully. Aug 13 00:04:54.425819 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:04:54.426262 systemd-logind[1562]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:04:54.427268 systemd-logind[1562]: Removed session 11. Aug 13 00:04:59.501362 systemd[1]: Started sshd@9-10.200.20.38:22-10.200.16.10:52836.service. Aug 13 00:04:59.989463 sshd[4046]: Accepted publickey for core from 10.200.16.10 port 52836 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:04:59.991758 sshd[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:04:59.996936 systemd[1]: Started session-12.scope. Aug 13 00:04:59.997240 systemd-logind[1562]: New session 12 of user core. Aug 13 00:05:00.413094 sshd[4046]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:00.415689 systemd[1]: sshd@9-10.200.20.38:22-10.200.16.10:52836.service: Deactivated successfully. Aug 13 00:05:00.415851 systemd-logind[1562]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:05:00.416489 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:05:00.417081 systemd-logind[1562]: Removed session 12. Aug 13 00:05:00.489530 systemd[1]: Started sshd@10-10.200.20.38:22-10.200.16.10:42394.service. 
Aug 13 00:05:00.962768 sshd[4062]: Accepted publickey for core from 10.200.16.10 port 42394 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:05:00.964207 sshd[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:05:00.968726 systemd[1]: Started session-13.scope. Aug 13 00:05:00.969076 systemd-logind[1562]: New session 13 of user core. Aug 13 00:05:01.420742 sshd[4062]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:01.423198 systemd[1]: sshd@10-10.200.20.38:22-10.200.16.10:42394.service: Deactivated successfully. Aug 13 00:05:01.424350 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:05:01.424379 systemd-logind[1562]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:05:01.425365 systemd-logind[1562]: Removed session 13. Aug 13 00:05:01.498998 systemd[1]: Started sshd@11-10.200.20.38:22-10.200.16.10:42402.service. Aug 13 00:05:01.988026 sshd[4074]: Accepted publickey for core from 10.200.16.10 port 42402 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:05:01.989802 sshd[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:05:01.993917 systemd-logind[1562]: New session 14 of user core. Aug 13 00:05:01.994379 systemd[1]: Started session-14.scope. Aug 13 00:05:02.403302 sshd[4074]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:02.405733 systemd-logind[1562]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:05:02.405898 systemd[1]: sshd@11-10.200.20.38:22-10.200.16.10:42402.service: Deactivated successfully. Aug 13 00:05:02.406709 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:05:02.407148 systemd-logind[1562]: Removed session 14. Aug 13 00:05:07.477754 systemd[1]: Started sshd@12-10.200.20.38:22-10.200.16.10:42408.service. 
Aug 13 00:05:07.944932 sshd[4089]: Accepted publickey for core from 10.200.16.10 port 42408 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M
Aug 13 00:05:07.946560 sshd[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:05:07.951069 systemd[1]: Started session-15.scope.
Aug 13 00:05:07.951381 systemd-logind[1562]: New session 15 of user core.
Aug 13 00:05:08.361097 sshd[4089]: pam_unix(sshd:session): session closed for user core
Aug 13 00:05:08.363666 systemd[1]: sshd@12-10.200.20.38:22-10.200.16.10:42408.service: Deactivated successfully.
Aug 13 00:05:08.364431 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 00:05:08.364808 systemd-logind[1562]: Session 15 logged out. Waiting for processes to exit.
Aug 13 00:05:08.365482 systemd-logind[1562]: Removed session 15.
Aug 13 00:05:13.440697 systemd[1]: Started sshd@13-10.200.20.38:22-10.200.16.10:46744.service.
Aug 13 00:05:13.929043 sshd[4102]: Accepted publickey for core from 10.200.16.10 port 46744 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M
Aug 13 00:05:13.930720 sshd[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:05:13.935119 systemd[1]: Started session-16.scope.
Aug 13 00:05:13.935413 systemd-logind[1562]: New session 16 of user core.
Aug 13 00:05:14.354378 sshd[4102]: pam_unix(sshd:session): session closed for user core
Aug 13 00:05:14.357295 systemd[1]: sshd@13-10.200.20.38:22-10.200.16.10:46744.service: Deactivated successfully.
Aug 13 00:05:14.358074 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 00:05:14.358438 systemd-logind[1562]: Session 16 logged out. Waiting for processes to exit.
Aug 13 00:05:14.359328 systemd-logind[1562]: Removed session 16.
Aug 13 00:05:14.433730 systemd[1]: Started sshd@14-10.200.20.38:22-10.200.16.10:46758.service.
Aug 13 00:05:14.918991 sshd[4115]: Accepted publickey for core from 10.200.16.10 port 46758 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M
Aug 13 00:05:14.920615 sshd[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:05:14.924866 systemd[1]: Started session-17.scope.
Aug 13 00:05:14.926060 systemd-logind[1562]: New session 17 of user core.
Aug 13 00:05:15.364680 sshd[4115]: pam_unix(sshd:session): session closed for user core
Aug 13 00:05:15.367166 systemd-logind[1562]: Session 17 logged out. Waiting for processes to exit.
Aug 13 00:05:15.367306 systemd[1]: sshd@14-10.200.20.38:22-10.200.16.10:46758.service: Deactivated successfully.
Aug 13 00:05:15.368126 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 00:05:15.368532 systemd-logind[1562]: Removed session 17.
Aug 13 00:05:15.444329 systemd[1]: Started sshd@15-10.200.20.38:22-10.200.16.10:46774.service.
Aug 13 00:05:15.932394 sshd[4125]: Accepted publickey for core from 10.200.16.10 port 46774 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M
Aug 13 00:05:15.933803 sshd[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:05:15.937976 systemd-logind[1562]: New session 18 of user core.
Aug 13 00:05:15.938225 systemd[1]: Started session-18.scope.
Aug 13 00:05:17.439152 sshd[4125]: pam_unix(sshd:session): session closed for user core
Aug 13 00:05:17.441735 systemd[1]: sshd@15-10.200.20.38:22-10.200.16.10:46774.service: Deactivated successfully.
Aug 13 00:05:17.442700 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 00:05:17.442726 systemd-logind[1562]: Session 18 logged out. Waiting for processes to exit.
Aug 13 00:05:17.443698 systemd-logind[1562]: Removed session 18.
Aug 13 00:05:17.517966 systemd[1]: Started sshd@16-10.200.20.38:22-10.200.16.10:46778.service.
Aug 13 00:05:18.006006 sshd[4143]: Accepted publickey for core from 10.200.16.10 port 46778 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M
Aug 13 00:05:18.007301 sshd[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:05:18.011678 systemd[1]: Started session-19.scope.
Aug 13 00:05:18.012027 systemd-logind[1562]: New session 19 of user core.
Aug 13 00:05:18.515374 sshd[4143]: pam_unix(sshd:session): session closed for user core
Aug 13 00:05:18.518140 systemd[1]: sshd@16-10.200.20.38:22-10.200.16.10:46778.service: Deactivated successfully.
Aug 13 00:05:18.519333 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 00:05:18.519801 systemd-logind[1562]: Session 19 logged out. Waiting for processes to exit.
Aug 13 00:05:18.520816 systemd-logind[1562]: Removed session 19.
Aug 13 00:05:18.589578 systemd[1]: Started sshd@17-10.200.20.38:22-10.200.16.10:46790.service.
Aug 13 00:05:19.039422 sshd[4153]: Accepted publickey for core from 10.200.16.10 port 46790 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M
Aug 13 00:05:19.041133 sshd[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:05:19.044978 systemd-logind[1562]: New session 20 of user core.
Aug 13 00:05:19.045345 systemd[1]: Started session-20.scope.
Aug 13 00:05:19.421179 sshd[4153]: pam_unix(sshd:session): session closed for user core
Aug 13 00:05:19.424083 systemd-logind[1562]: Session 20 logged out. Waiting for processes to exit.
Aug 13 00:05:19.424298 systemd[1]: sshd@17-10.200.20.38:22-10.200.16.10:46790.service: Deactivated successfully.
Aug 13 00:05:19.425113 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 00:05:19.425559 systemd-logind[1562]: Removed session 20.
Aug 13 00:05:24.498574 systemd[1]: Started sshd@18-10.200.20.38:22-10.200.16.10:45414.service.
Aug 13 00:05:24.971644 sshd[4169]: Accepted publickey for core from 10.200.16.10 port 45414 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M
Aug 13 00:05:24.973345 sshd[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:05:24.977841 systemd[1]: Started session-21.scope.
Aug 13 00:05:24.978999 systemd-logind[1562]: New session 21 of user core.
Aug 13 00:05:25.388262 sshd[4169]: pam_unix(sshd:session): session closed for user core
Aug 13 00:05:25.391286 systemd[1]: sshd@18-10.200.20.38:22-10.200.16.10:45414.service: Deactivated successfully.
Aug 13 00:05:25.392065 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 00:05:25.392553 systemd-logind[1562]: Session 21 logged out. Waiting for processes to exit.
Aug 13 00:05:25.393293 systemd-logind[1562]: Removed session 21.
Aug 13 00:05:30.468520 systemd[1]: Started sshd@19-10.200.20.38:22-10.200.16.10:37978.service.
Aug 13 00:05:30.956993 sshd[4182]: Accepted publickey for core from 10.200.16.10 port 37978 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M
Aug 13 00:05:30.958747 sshd[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:05:30.963949 systemd[1]: Started session-22.scope.
Aug 13 00:05:30.964444 systemd-logind[1562]: New session 22 of user core.
Aug 13 00:05:31.386197 sshd[4182]: pam_unix(sshd:session): session closed for user core
Aug 13 00:05:31.388731 systemd[1]: sshd@19-10.200.20.38:22-10.200.16.10:37978.service: Deactivated successfully.
Aug 13 00:05:31.389951 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 00:05:31.390174 systemd-logind[1562]: Session 22 logged out. Waiting for processes to exit.
Aug 13 00:05:31.391334 systemd-logind[1562]: Removed session 22.
Aug 13 00:05:36.464624 systemd[1]: Started sshd@20-10.200.20.38:22-10.200.16.10:37992.service.
Aug 13 00:05:36.938862 sshd[4195]: Accepted publickey for core from 10.200.16.10 port 37992 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M
Aug 13 00:05:36.940469 sshd[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:05:36.944705 systemd[1]: Started session-23.scope.
Aug 13 00:05:36.945216 systemd-logind[1562]: New session 23 of user core.
Aug 13 00:05:37.338109 sshd[4195]: pam_unix(sshd:session): session closed for user core
Aug 13 00:05:37.340793 systemd-logind[1562]: Session 23 logged out. Waiting for processes to exit.
Aug 13 00:05:37.341482 systemd[1]: sshd@20-10.200.20.38:22-10.200.16.10:37992.service: Deactivated successfully.
Aug 13 00:05:37.342273 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 00:05:37.342936 systemd-logind[1562]: Removed session 23.
Aug 13 00:05:37.414626 systemd[1]: Started sshd@21-10.200.20.38:22-10.200.16.10:38006.service.
Aug 13 00:05:37.888386 sshd[4209]: Accepted publickey for core from 10.200.16.10 port 38006 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M
Aug 13 00:05:37.889834 sshd[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:05:37.893954 systemd-logind[1562]: New session 24 of user core.
Aug 13 00:05:37.894370 systemd[1]: Started session-24.scope.
Aug 13 00:05:39.809040 env[1581]: time="2025-08-13T00:05:39.806020032Z" level=info msg="StopContainer for \"9d428362348a1400f27cc93761018bb80e9ee3b2effeb4286ece13c1731918ed\" with timeout 30 (s)"
Aug 13 00:05:39.808376 systemd[1]: run-containerd-runc-k8s.io-b02c2d127a1fd08e6dea3777ca93d3529ccf8e3a9edd59e62a35739c8b911c4c-runc.3XnOEA.mount: Deactivated successfully.
Aug 13 00:05:39.809809 env[1581]: time="2025-08-13T00:05:39.809695841Z" level=info msg="Stop container \"9d428362348a1400f27cc93761018bb80e9ee3b2effeb4286ece13c1731918ed\" with signal terminated"
Aug 13 00:05:39.825477 env[1581]: time="2025-08-13T00:05:39.825414797Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 00:05:39.831274 env[1581]: time="2025-08-13T00:05:39.831243730Z" level=info msg="StopContainer for \"b02c2d127a1fd08e6dea3777ca93d3529ccf8e3a9edd59e62a35739c8b911c4c\" with timeout 2 (s)"
Aug 13 00:05:39.831620 env[1581]: time="2025-08-13T00:05:39.831598731Z" level=info msg="Stop container \"b02c2d127a1fd08e6dea3777ca93d3529ccf8e3a9edd59e62a35739c8b911c4c\" with signal terminated"
Aug 13 00:05:39.838539 systemd-networkd[1771]: lxc_health: Link DOWN
Aug 13 00:05:39.838545 systemd-networkd[1771]: lxc_health: Lost carrier
Aug 13 00:05:39.845210 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d428362348a1400f27cc93761018bb80e9ee3b2effeb4286ece13c1731918ed-rootfs.mount: Deactivated successfully.
Aug 13 00:05:39.874329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b02c2d127a1fd08e6dea3777ca93d3529ccf8e3a9edd59e62a35739c8b911c4c-rootfs.mount: Deactivated successfully.
Aug 13 00:05:39.881640 env[1581]: time="2025-08-13T00:05:39.881584126Z" level=info msg="shim disconnected" id=9d428362348a1400f27cc93761018bb80e9ee3b2effeb4286ece13c1731918ed
Aug 13 00:05:39.882457 env[1581]: time="2025-08-13T00:05:39.881804566Z" level=warning msg="cleaning up after shim disconnected" id=9d428362348a1400f27cc93761018bb80e9ee3b2effeb4286ece13c1731918ed namespace=k8s.io
Aug 13 00:05:39.882457 env[1581]: time="2025-08-13T00:05:39.881820246Z" level=info msg="cleaning up dead shim"
Aug 13 00:05:39.889857 env[1581]: time="2025-08-13T00:05:39.889815945Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:05:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4279 runtime=io.containerd.runc.v2\n"
Aug 13 00:05:39.899574 env[1581]: time="2025-08-13T00:05:39.899522807Z" level=info msg="shim disconnected" id=b02c2d127a1fd08e6dea3777ca93d3529ccf8e3a9edd59e62a35739c8b911c4c
Aug 13 00:05:39.899783 env[1581]: time="2025-08-13T00:05:39.899649447Z" level=warning msg="cleaning up after shim disconnected" id=b02c2d127a1fd08e6dea3777ca93d3529ccf8e3a9edd59e62a35739c8b911c4c namespace=k8s.io
Aug 13 00:05:39.899783 env[1581]: time="2025-08-13T00:05:39.899662887Z" level=info msg="cleaning up dead shim"
Aug 13 00:05:39.900900 env[1581]: time="2025-08-13T00:05:39.900853850Z" level=info msg="StopContainer for \"9d428362348a1400f27cc93761018bb80e9ee3b2effeb4286ece13c1731918ed\" returns successfully"
Aug 13 00:05:39.901476 env[1581]: time="2025-08-13T00:05:39.901447892Z" level=info msg="StopPodSandbox for \"02ddcb7a190243947389ad26948f292405dc4244ab9054d40b23d154a87115c2\""
Aug 13 00:05:39.901542 env[1581]: time="2025-08-13T00:05:39.901510452Z" level=info msg="Container to stop \"9d428362348a1400f27cc93761018bb80e9ee3b2effeb4286ece13c1731918ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:05:39.908333 env[1581]: time="2025-08-13T00:05:39.908301227Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:05:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4292 runtime=io.containerd.runc.v2\n"
Aug 13 00:05:39.913736 env[1581]: time="2025-08-13T00:05:39.913040478Z" level=info msg="StopContainer for \"b02c2d127a1fd08e6dea3777ca93d3529ccf8e3a9edd59e62a35739c8b911c4c\" returns successfully"
Aug 13 00:05:39.914416 env[1581]: time="2025-08-13T00:05:39.914211361Z" level=info msg="StopPodSandbox for \"dd4c7dcb94ea4b60d3e1a1885be8be5ac1379c28fed610c23ece2d3a10da5e38\""
Aug 13 00:05:39.914416 env[1581]: time="2025-08-13T00:05:39.914267921Z" level=info msg="Container to stop \"b03a11575b7e67e7934a539839601f98ab8ee9dbabd0ef797e7d05a7c9cb7dde\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:05:39.914416 env[1581]: time="2025-08-13T00:05:39.914281641Z" level=info msg="Container to stop \"8b4ca441f8441235fb98bbfdd7280ed1885e33d66d01571a682c2bd4aef42db9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:05:39.914416 env[1581]: time="2025-08-13T00:05:39.914292481Z" level=info msg="Container to stop \"b02c2d127a1fd08e6dea3777ca93d3529ccf8e3a9edd59e62a35739c8b911c4c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:05:39.914416 env[1581]: time="2025-08-13T00:05:39.914306001Z" level=info msg="Container to stop \"7ef903890ccf524e6af921b149872b4be3376ceb1a6f70293a4918a3c51b9146\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:05:39.914416 env[1581]: time="2025-08-13T00:05:39.914316841Z" level=info msg="Container to stop \"d3550e64cbd03f6059b6c961715f3010f0e57293ae1582610110477d72c2a8a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:05:39.947556 env[1581]: time="2025-08-13T00:05:39.947503917Z" level=info msg="shim disconnected" id=02ddcb7a190243947389ad26948f292405dc4244ab9054d40b23d154a87115c2
Aug 13 00:05:39.948180 env[1581]: time="2025-08-13T00:05:39.948156679Z" level=warning msg="cleaning up after shim disconnected" id=02ddcb7a190243947389ad26948f292405dc4244ab9054d40b23d154a87115c2 namespace=k8s.io
Aug 13 00:05:39.948307 env[1581]: time="2025-08-13T00:05:39.948279999Z" level=info msg="cleaning up dead shim"
Aug 13 00:05:39.956969 env[1581]: time="2025-08-13T00:05:39.956938619Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:05:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4348 runtime=io.containerd.runc.v2\n"
Aug 13 00:05:39.957803 env[1581]: time="2025-08-13T00:05:39.957316060Z" level=info msg="TearDown network for sandbox \"02ddcb7a190243947389ad26948f292405dc4244ab9054d40b23d154a87115c2\" successfully"
Aug 13 00:05:39.957803 env[1581]: time="2025-08-13T00:05:39.957365060Z" level=info msg="StopPodSandbox for \"02ddcb7a190243947389ad26948f292405dc4244ab9054d40b23d154a87115c2\" returns successfully"
Aug 13 00:05:39.960012 env[1581]: time="2025-08-13T00:05:39.959975866Z" level=info msg="shim disconnected" id=dd4c7dcb94ea4b60d3e1a1885be8be5ac1379c28fed610c23ece2d3a10da5e38
Aug 13 00:05:39.961564 env[1581]: time="2025-08-13T00:05:39.960924788Z" level=warning msg="cleaning up after shim disconnected" id=dd4c7dcb94ea4b60d3e1a1885be8be5ac1379c28fed610c23ece2d3a10da5e38 namespace=k8s.io
Aug 13 00:05:39.961564 env[1581]: time="2025-08-13T00:05:39.960949908Z" level=info msg="cleaning up dead shim"
Aug 13 00:05:39.968009 env[1581]: time="2025-08-13T00:05:39.967978044Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:05:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4363 runtime=io.containerd.runc.v2\n"
Aug 13 00:05:39.968285 env[1581]: time="2025-08-13T00:05:39.968259285Z" level=info msg="TearDown network for sandbox \"dd4c7dcb94ea4b60d3e1a1885be8be5ac1379c28fed610c23ece2d3a10da5e38\" successfully"
Aug 13 00:05:39.968328 env[1581]: time="2025-08-13T00:05:39.968284725Z" level=info msg="StopPodSandbox for \"dd4c7dcb94ea4b60d3e1a1885be8be5ac1379c28fed610c23ece2d3a10da5e38\" returns successfully"
Aug 13 00:05:40.032991 kubelet[2662]: I0813 00:05:40.032956 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f7607543-368e-4809-997d-75a32727f91e-hubble-tls\") pod \"f7607543-368e-4809-997d-75a32727f91e\" (UID: \"f7607543-368e-4809-997d-75a32727f91e\") "
Aug 13 00:05:40.032991 kubelet[2662]: I0813 00:05:40.032995 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-host-proc-sys-net\") pod \"f7607543-368e-4809-997d-75a32727f91e\" (UID: \"f7607543-368e-4809-997d-75a32727f91e\") "
Aug 13 00:05:40.033366 kubelet[2662]: I0813 00:05:40.033112 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fgs5\" (UniqueName: \"kubernetes.io/projected/724fbe26-40f5-4a64-8cdb-b7ada888e4cf-kube-api-access-7fgs5\") pod \"724fbe26-40f5-4a64-8cdb-b7ada888e4cf\" (UID: \"724fbe26-40f5-4a64-8cdb-b7ada888e4cf\") "
Aug 13 00:05:40.033366 kubelet[2662]: I0813 00:05:40.033132 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-cilium-cgroup\") pod \"f7607543-368e-4809-997d-75a32727f91e\" (UID: \"f7607543-368e-4809-997d-75a32727f91e\") "
Aug 13 00:05:40.033366 kubelet[2662]: I0813 00:05:40.033145 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-etc-cni-netd\") pod \"f7607543-368e-4809-997d-75a32727f91e\" (UID: \"f7607543-368e-4809-997d-75a32727f91e\") "
Aug 13 00:05:40.033366 kubelet[2662]: I0813 00:05:40.033160 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-xtables-lock\") pod \"f7607543-368e-4809-997d-75a32727f91e\" (UID: \"f7607543-368e-4809-997d-75a32727f91e\") "
Aug 13 00:05:40.033366 kubelet[2662]: I0813 00:05:40.033203 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-lib-modules\") pod \"f7607543-368e-4809-997d-75a32727f91e\" (UID: \"f7607543-368e-4809-997d-75a32727f91e\") "
Aug 13 00:05:40.033366 kubelet[2662]: I0813 00:05:40.033218 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-host-proc-sys-kernel\") pod \"f7607543-368e-4809-997d-75a32727f91e\" (UID: \"f7607543-368e-4809-997d-75a32727f91e\") "
Aug 13 00:05:40.033549 kubelet[2662]: I0813 00:05:40.033239 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/724fbe26-40f5-4a64-8cdb-b7ada888e4cf-cilium-config-path\") pod \"724fbe26-40f5-4a64-8cdb-b7ada888e4cf\" (UID: \"724fbe26-40f5-4a64-8cdb-b7ada888e4cf\") "
Aug 13 00:05:40.033549 kubelet[2662]: I0813 00:05:40.033267 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-cni-path\") pod \"f7607543-368e-4809-997d-75a32727f91e\" (UID: \"f7607543-368e-4809-997d-75a32727f91e\") "
Aug 13 00:05:40.033549 kubelet[2662]: I0813 00:05:40.033285 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7607543-368e-4809-997d-75a32727f91e-cilium-config-path\") pod \"f7607543-368e-4809-997d-75a32727f91e\" (UID: \"f7607543-368e-4809-997d-75a32727f91e\") "
Aug 13 00:05:40.033549 kubelet[2662]: I0813 00:05:40.033298 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-hostproc\") pod \"f7607543-368e-4809-997d-75a32727f91e\" (UID: \"f7607543-368e-4809-997d-75a32727f91e\") "
Aug 13 00:05:40.033549 kubelet[2662]: I0813 00:05:40.033319 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f7607543-368e-4809-997d-75a32727f91e-clustermesh-secrets\") pod \"f7607543-368e-4809-997d-75a32727f91e\" (UID: \"f7607543-368e-4809-997d-75a32727f91e\") "
Aug 13 00:05:40.033549 kubelet[2662]: I0813 00:05:40.033346 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5sln\" (UniqueName: \"kubernetes.io/projected/f7607543-368e-4809-997d-75a32727f91e-kube-api-access-w5sln\") pod \"f7607543-368e-4809-997d-75a32727f91e\" (UID: \"f7607543-368e-4809-997d-75a32727f91e\") "
Aug 13 00:05:40.033683 kubelet[2662]: I0813 00:05:40.033363 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-bpf-maps\") pod \"f7607543-368e-4809-997d-75a32727f91e\" (UID: \"f7607543-368e-4809-997d-75a32727f91e\") "
Aug 13 00:05:40.033683 kubelet[2662]: I0813 00:05:40.033378 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-cilium-run\") pod \"f7607543-368e-4809-997d-75a32727f91e\" (UID: \"f7607543-368e-4809-997d-75a32727f91e\") "
Aug 13 00:05:40.033683 kubelet[2662]: I0813 00:05:40.033432 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f7607543-368e-4809-997d-75a32727f91e" (UID: "f7607543-368e-4809-997d-75a32727f91e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:05:40.033683 kubelet[2662]: I0813 00:05:40.033467 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f7607543-368e-4809-997d-75a32727f91e" (UID: "f7607543-368e-4809-997d-75a32727f91e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:05:40.034918 kubelet[2662]: I0813 00:05:40.034011 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f7607543-368e-4809-997d-75a32727f91e" (UID: "f7607543-368e-4809-997d-75a32727f91e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:05:40.034918 kubelet[2662]: I0813 00:05:40.034045 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f7607543-368e-4809-997d-75a32727f91e" (UID: "f7607543-368e-4809-997d-75a32727f91e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:05:40.034918 kubelet[2662]: I0813 00:05:40.034062 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f7607543-368e-4809-997d-75a32727f91e" (UID: "f7607543-368e-4809-997d-75a32727f91e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:05:40.034918 kubelet[2662]: I0813 00:05:40.034085 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f7607543-368e-4809-997d-75a32727f91e" (UID: "f7607543-368e-4809-997d-75a32727f91e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:05:40.034918 kubelet[2662]: I0813 00:05:40.034099 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f7607543-368e-4809-997d-75a32727f91e" (UID: "f7607543-368e-4809-997d-75a32727f91e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:05:40.035929 kubelet[2662]: I0813 00:05:40.035900 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/724fbe26-40f5-4a64-8cdb-b7ada888e4cf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "724fbe26-40f5-4a64-8cdb-b7ada888e4cf" (UID: "724fbe26-40f5-4a64-8cdb-b7ada888e4cf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 00:05:40.036001 kubelet[2662]: I0813 00:05:40.035945 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-cni-path" (OuterVolumeSpecName: "cni-path") pod "f7607543-368e-4809-997d-75a32727f91e" (UID: "f7607543-368e-4809-997d-75a32727f91e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:05:40.038166 kubelet[2662]: I0813 00:05:40.038133 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/724fbe26-40f5-4a64-8cdb-b7ada888e4cf-kube-api-access-7fgs5" (OuterVolumeSpecName: "kube-api-access-7fgs5") pod "724fbe26-40f5-4a64-8cdb-b7ada888e4cf" (UID: "724fbe26-40f5-4a64-8cdb-b7ada888e4cf"). InnerVolumeSpecName "kube-api-access-7fgs5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 00:05:40.038765 kubelet[2662]: I0813 00:05:40.038733 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f7607543-368e-4809-997d-75a32727f91e" (UID: "f7607543-368e-4809-997d-75a32727f91e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:05:40.038831 kubelet[2662]: I0813 00:05:40.038766 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7607543-368e-4809-997d-75a32727f91e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f7607543-368e-4809-997d-75a32727f91e" (UID: "f7607543-368e-4809-997d-75a32727f91e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 00:05:40.038863 kubelet[2662]: I0813 00:05:40.038840 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7607543-368e-4809-997d-75a32727f91e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f7607543-368e-4809-997d-75a32727f91e" (UID: "f7607543-368e-4809-997d-75a32727f91e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 00:05:40.038937 kubelet[2662]: I0813 00:05:40.038861 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-hostproc" (OuterVolumeSpecName: "hostproc") pod "f7607543-368e-4809-997d-75a32727f91e" (UID: "f7607543-368e-4809-997d-75a32727f91e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 00:05:40.040645 kubelet[2662]: I0813 00:05:40.040621 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7607543-368e-4809-997d-75a32727f91e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f7607543-368e-4809-997d-75a32727f91e" (UID: "f7607543-368e-4809-997d-75a32727f91e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 00:05:40.040906 kubelet[2662]: I0813 00:05:40.040858 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7607543-368e-4809-997d-75a32727f91e-kube-api-access-w5sln" (OuterVolumeSpecName: "kube-api-access-w5sln") pod "f7607543-368e-4809-997d-75a32727f91e" (UID: "f7607543-368e-4809-997d-75a32727f91e"). InnerVolumeSpecName "kube-api-access-w5sln". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 00:05:40.134198 kubelet[2662]: I0813 00:05:40.134166 2662 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-lib-modules\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\""
Aug 13 00:05:40.134398 kubelet[2662]: I0813 00:05:40.134385 2662 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-host-proc-sys-kernel\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\""
Aug 13 00:05:40.134487 kubelet[2662]: I0813 00:05:40.134475 2662 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/724fbe26-40f5-4a64-8cdb-b7ada888e4cf-cilium-config-path\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\""
Aug 13 00:05:40.134554 kubelet[2662]: I0813 00:05:40.134544 2662 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-cni-path\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\""
Aug 13 00:05:40.134618 kubelet[2662]: I0813 00:05:40.134607 2662 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7607543-368e-4809-997d-75a32727f91e-cilium-config-path\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\""
Aug 13 00:05:40.134683 kubelet[2662]: I0813 00:05:40.134665 2662 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f7607543-368e-4809-997d-75a32727f91e-clustermesh-secrets\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\""
Aug 13 00:05:40.134757 kubelet[2662]: I0813 00:05:40.134747 2662 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-hostproc\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\""
Aug 13 00:05:40.134819 kubelet[2662]: I0813 00:05:40.134810 2662 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-bpf-maps\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\""
Aug 13 00:05:40.134902 kubelet[2662]: I0813 00:05:40.134869 2662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5sln\" (UniqueName: \"kubernetes.io/projected/f7607543-368e-4809-997d-75a32727f91e-kube-api-access-w5sln\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\""
Aug 13 00:05:40.134981 kubelet[2662]: I0813 00:05:40.134971 2662 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-cilium-run\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\""
Aug 13 00:05:40.135060 kubelet[2662]: I0813 00:05:40.135050 2662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fgs5\" (UniqueName: \"kubernetes.io/projected/724fbe26-40f5-4a64-8cdb-b7ada888e4cf-kube-api-access-7fgs5\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\""
Aug 13 00:05:40.135133 kubelet[2662]: I0813 00:05:40.135109 2662 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f7607543-368e-4809-997d-75a32727f91e-hubble-tls\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\""
Aug 13 00:05:40.135206 kubelet[2662]: I0813 00:05:40.135186 2662 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-host-proc-sys-net\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\""
Aug 13 00:05:40.135269 kubelet[2662]: I0813 00:05:40.135259 2662 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-cilium-cgroup\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\""
Aug 13 00:05:40.135344 kubelet[2662]: I0813 00:05:40.135335 2662 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-etc-cni-netd\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\""
Aug 13 00:05:40.135430 kubelet[2662]: I0813 00:05:40.135421 2662 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7607543-368e-4809-997d-75a32727f91e-xtables-lock\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\""
Aug 13 00:05:40.341492 kubelet[2662]: E0813 00:05:40.341459 2662 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 00:05:40.586817 kubelet[2662]: I0813 00:05:40.586719 2662 scope.go:117] "RemoveContainer" containerID="b02c2d127a1fd08e6dea3777ca93d3529ccf8e3a9edd59e62a35739c8b911c4c"
Aug 13 00:05:40.594971 env[1581]: time="2025-08-13T00:05:40.594672711Z" level=info msg="RemoveContainer for \"b02c2d127a1fd08e6dea3777ca93d3529ccf8e3a9edd59e62a35739c8b911c4c\""
Aug 13 00:05:40.605514 env[1581]: time="2025-08-13T00:05:40.605472735Z" level=info msg="RemoveContainer for \"b02c2d127a1fd08e6dea3777ca93d3529ccf8e3a9edd59e62a35739c8b911c4c\" returns successfully"
Aug 13 00:05:40.610494 kubelet[2662]: I0813 00:05:40.610031 2662 scope.go:117] "RemoveContainer" containerID="d3550e64cbd03f6059b6c961715f3010f0e57293ae1582610110477d72c2a8a3"
Aug 13 00:05:40.612598 env[1581]: time="2025-08-13T00:05:40.612339990Z" level=info msg="RemoveContainer for \"d3550e64cbd03f6059b6c961715f3010f0e57293ae1582610110477d72c2a8a3\""
Aug 13 00:05:40.620268 env[1581]: time="2025-08-13T00:05:40.619805846Z" level=info msg="RemoveContainer for \"d3550e64cbd03f6059b6c961715f3010f0e57293ae1582610110477d72c2a8a3\" returns successfully"
Aug 13 00:05:40.623024 kubelet[2662]: I0813 00:05:40.622997 2662 scope.go:117] "RemoveContainer" containerID="8b4ca441f8441235fb98bbfdd7280ed1885e33d66d01571a682c2bd4aef42db9"
Aug 13 00:05:40.633451 env[1581]: time="2025-08-13T00:05:40.633420396Z" level=info msg="RemoveContainer for \"8b4ca441f8441235fb98bbfdd7280ed1885e33d66d01571a682c2bd4aef42db9\""
Aug 13 00:05:40.640160 env[1581]: time="2025-08-13T00:05:40.640132611Z" level=info msg="RemoveContainer for \"8b4ca441f8441235fb98bbfdd7280ed1885e33d66d01571a682c2bd4aef42db9\" returns successfully"
Aug 13 00:05:40.640394 kubelet[2662]: I0813 00:05:40.640372 2662 scope.go:117] "RemoveContainer" containerID="7ef903890ccf524e6af921b149872b4be3376ceb1a6f70293a4918a3c51b9146"
Aug 13 00:05:40.641252 env[1581]: time="2025-08-13T00:05:40.641230894Z" level=info msg="RemoveContainer for \"7ef903890ccf524e6af921b149872b4be3376ceb1a6f70293a4918a3c51b9146\""
Aug 13 00:05:40.647175 env[1581]: time="2025-08-13T00:05:40.647147947Z" level=info msg="RemoveContainer for \"7ef903890ccf524e6af921b149872b4be3376ceb1a6f70293a4918a3c51b9146\" returns successfully"
Aug 13 00:05:40.647997 kubelet[2662]: I0813 00:05:40.647968 2662 scope.go:117] "RemoveContainer" containerID="b03a11575b7e67e7934a539839601f98ab8ee9dbabd0ef797e7d05a7c9cb7dde"
Aug 13 00:05:40.648968 env[1581]: time="2025-08-13T00:05:40.648947031Z" level=info msg="RemoveContainer for \"b03a11575b7e67e7934a539839601f98ab8ee9dbabd0ef797e7d05a7c9cb7dde\""
Aug 13 00:05:40.654587 env[1581]: time="2025-08-13T00:05:40.654560443Z" level=info msg="RemoveContainer for \"b03a11575b7e67e7934a539839601f98ab8ee9dbabd0ef797e7d05a7c9cb7dde\" returns successfully"
Aug 13 00:05:40.654829 kubelet[2662]: I0813 00:05:40.654795 2662 scope.go:117] "RemoveContainer" containerID="b02c2d127a1fd08e6dea3777ca93d3529ccf8e3a9edd59e62a35739c8b911c4c"
Aug 13 00:05:40.655185 env[1581]: time="2025-08-13T00:05:40.655105964Z"
level=error msg="ContainerStatus for \"b02c2d127a1fd08e6dea3777ca93d3529ccf8e3a9edd59e62a35739c8b911c4c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b02c2d127a1fd08e6dea3777ca93d3529ccf8e3a9edd59e62a35739c8b911c4c\": not found" Aug 13 00:05:40.655313 kubelet[2662]: E0813 00:05:40.655283 2662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b02c2d127a1fd08e6dea3777ca93d3529ccf8e3a9edd59e62a35739c8b911c4c\": not found" containerID="b02c2d127a1fd08e6dea3777ca93d3529ccf8e3a9edd59e62a35739c8b911c4c" Aug 13 00:05:40.655422 kubelet[2662]: I0813 00:05:40.655328 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b02c2d127a1fd08e6dea3777ca93d3529ccf8e3a9edd59e62a35739c8b911c4c"} err="failed to get container status \"b02c2d127a1fd08e6dea3777ca93d3529ccf8e3a9edd59e62a35739c8b911c4c\": rpc error: code = NotFound desc = an error occurred when try to find container \"b02c2d127a1fd08e6dea3777ca93d3529ccf8e3a9edd59e62a35739c8b911c4c\": not found" Aug 13 00:05:40.655461 kubelet[2662]: I0813 00:05:40.655421 2662 scope.go:117] "RemoveContainer" containerID="d3550e64cbd03f6059b6c961715f3010f0e57293ae1582610110477d72c2a8a3" Aug 13 00:05:40.655668 env[1581]: time="2025-08-13T00:05:40.655628045Z" level=error msg="ContainerStatus for \"d3550e64cbd03f6059b6c961715f3010f0e57293ae1582610110477d72c2a8a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d3550e64cbd03f6059b6c961715f3010f0e57293ae1582610110477d72c2a8a3\": not found" Aug 13 00:05:40.655833 kubelet[2662]: E0813 00:05:40.655813 2662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d3550e64cbd03f6059b6c961715f3010f0e57293ae1582610110477d72c2a8a3\": not found" 
containerID="d3550e64cbd03f6059b6c961715f3010f0e57293ae1582610110477d72c2a8a3" Aug 13 00:05:40.655899 kubelet[2662]: I0813 00:05:40.655835 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d3550e64cbd03f6059b6c961715f3010f0e57293ae1582610110477d72c2a8a3"} err="failed to get container status \"d3550e64cbd03f6059b6c961715f3010f0e57293ae1582610110477d72c2a8a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"d3550e64cbd03f6059b6c961715f3010f0e57293ae1582610110477d72c2a8a3\": not found" Aug 13 00:05:40.655899 kubelet[2662]: I0813 00:05:40.655849 2662 scope.go:117] "RemoveContainer" containerID="8b4ca441f8441235fb98bbfdd7280ed1885e33d66d01571a682c2bd4aef42db9" Aug 13 00:05:40.656167 env[1581]: time="2025-08-13T00:05:40.656119127Z" level=error msg="ContainerStatus for \"8b4ca441f8441235fb98bbfdd7280ed1885e33d66d01571a682c2bd4aef42db9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8b4ca441f8441235fb98bbfdd7280ed1885e33d66d01571a682c2bd4aef42db9\": not found" Aug 13 00:05:40.656390 kubelet[2662]: E0813 00:05:40.656288 2662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8b4ca441f8441235fb98bbfdd7280ed1885e33d66d01571a682c2bd4aef42db9\": not found" containerID="8b4ca441f8441235fb98bbfdd7280ed1885e33d66d01571a682c2bd4aef42db9" Aug 13 00:05:40.656390 kubelet[2662]: I0813 00:05:40.656315 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8b4ca441f8441235fb98bbfdd7280ed1885e33d66d01571a682c2bd4aef42db9"} err="failed to get container status \"8b4ca441f8441235fb98bbfdd7280ed1885e33d66d01571a682c2bd4aef42db9\": rpc error: code = NotFound desc = an error occurred when try to find container \"8b4ca441f8441235fb98bbfdd7280ed1885e33d66d01571a682c2bd4aef42db9\": not found" Aug 13 00:05:40.656390 
kubelet[2662]: I0813 00:05:40.656333 2662 scope.go:117] "RemoveContainer" containerID="7ef903890ccf524e6af921b149872b4be3376ceb1a6f70293a4918a3c51b9146" Aug 13 00:05:40.656634 env[1581]: time="2025-08-13T00:05:40.656593168Z" level=error msg="ContainerStatus for \"7ef903890ccf524e6af921b149872b4be3376ceb1a6f70293a4918a3c51b9146\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7ef903890ccf524e6af921b149872b4be3376ceb1a6f70293a4918a3c51b9146\": not found" Aug 13 00:05:40.656814 kubelet[2662]: E0813 00:05:40.656785 2662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7ef903890ccf524e6af921b149872b4be3376ceb1a6f70293a4918a3c51b9146\": not found" containerID="7ef903890ccf524e6af921b149872b4be3376ceb1a6f70293a4918a3c51b9146" Aug 13 00:05:40.656893 kubelet[2662]: I0813 00:05:40.656841 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7ef903890ccf524e6af921b149872b4be3376ceb1a6f70293a4918a3c51b9146"} err="failed to get container status \"7ef903890ccf524e6af921b149872b4be3376ceb1a6f70293a4918a3c51b9146\": rpc error: code = NotFound desc = an error occurred when try to find container \"7ef903890ccf524e6af921b149872b4be3376ceb1a6f70293a4918a3c51b9146\": not found" Aug 13 00:05:40.656893 kubelet[2662]: I0813 00:05:40.656857 2662 scope.go:117] "RemoveContainer" containerID="b03a11575b7e67e7934a539839601f98ab8ee9dbabd0ef797e7d05a7c9cb7dde" Aug 13 00:05:40.657142 env[1581]: time="2025-08-13T00:05:40.657094209Z" level=error msg="ContainerStatus for \"b03a11575b7e67e7934a539839601f98ab8ee9dbabd0ef797e7d05a7c9cb7dde\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b03a11575b7e67e7934a539839601f98ab8ee9dbabd0ef797e7d05a7c9cb7dde\": not found" Aug 13 00:05:40.657310 kubelet[2662]: E0813 00:05:40.657290 2662 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b03a11575b7e67e7934a539839601f98ab8ee9dbabd0ef797e7d05a7c9cb7dde\": not found" containerID="b03a11575b7e67e7934a539839601f98ab8ee9dbabd0ef797e7d05a7c9cb7dde" Aug 13 00:05:40.657373 kubelet[2662]: I0813 00:05:40.657313 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b03a11575b7e67e7934a539839601f98ab8ee9dbabd0ef797e7d05a7c9cb7dde"} err="failed to get container status \"b03a11575b7e67e7934a539839601f98ab8ee9dbabd0ef797e7d05a7c9cb7dde\": rpc error: code = NotFound desc = an error occurred when try to find container \"b03a11575b7e67e7934a539839601f98ab8ee9dbabd0ef797e7d05a7c9cb7dde\": not found" Aug 13 00:05:40.657373 kubelet[2662]: I0813 00:05:40.657336 2662 scope.go:117] "RemoveContainer" containerID="9d428362348a1400f27cc93761018bb80e9ee3b2effeb4286ece13c1731918ed" Aug 13 00:05:40.658418 env[1581]: time="2025-08-13T00:05:40.658375092Z" level=info msg="RemoveContainer for \"9d428362348a1400f27cc93761018bb80e9ee3b2effeb4286ece13c1731918ed\"" Aug 13 00:05:40.665194 env[1581]: time="2025-08-13T00:05:40.665166387Z" level=info msg="RemoveContainer for \"9d428362348a1400f27cc93761018bb80e9ee3b2effeb4286ece13c1731918ed\" returns successfully" Aug 13 00:05:40.665489 kubelet[2662]: I0813 00:05:40.665399 2662 scope.go:117] "RemoveContainer" containerID="9d428362348a1400f27cc93761018bb80e9ee3b2effeb4286ece13c1731918ed" Aug 13 00:05:40.665743 env[1581]: time="2025-08-13T00:05:40.665689868Z" level=error msg="ContainerStatus for \"9d428362348a1400f27cc93761018bb80e9ee3b2effeb4286ece13c1731918ed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9d428362348a1400f27cc93761018bb80e9ee3b2effeb4286ece13c1731918ed\": not found" Aug 13 00:05:40.665859 kubelet[2662]: E0813 00:05:40.665835 2662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"9d428362348a1400f27cc93761018bb80e9ee3b2effeb4286ece13c1731918ed\": not found" containerID="9d428362348a1400f27cc93761018bb80e9ee3b2effeb4286ece13c1731918ed" Aug 13 00:05:40.665982 kubelet[2662]: I0813 00:05:40.665955 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9d428362348a1400f27cc93761018bb80e9ee3b2effeb4286ece13c1731918ed"} err="failed to get container status \"9d428362348a1400f27cc93761018bb80e9ee3b2effeb4286ece13c1731918ed\": rpc error: code = NotFound desc = an error occurred when try to find container \"9d428362348a1400f27cc93761018bb80e9ee3b2effeb4286ece13c1731918ed\": not found" Aug 13 00:05:40.800036 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02ddcb7a190243947389ad26948f292405dc4244ab9054d40b23d154a87115c2-rootfs.mount: Deactivated successfully. Aug 13 00:05:40.800192 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-02ddcb7a190243947389ad26948f292405dc4244ab9054d40b23d154a87115c2-shm.mount: Deactivated successfully. Aug 13 00:05:40.800285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd4c7dcb94ea4b60d3e1a1885be8be5ac1379c28fed610c23ece2d3a10da5e38-rootfs.mount: Deactivated successfully. Aug 13 00:05:40.800364 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd4c7dcb94ea4b60d3e1a1885be8be5ac1379c28fed610c23ece2d3a10da5e38-shm.mount: Deactivated successfully. Aug 13 00:05:40.800437 systemd[1]: var-lib-kubelet-pods-724fbe26\x2d40f5\x2d4a64\x2d8cdb\x2db7ada888e4cf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7fgs5.mount: Deactivated successfully. Aug 13 00:05:40.800518 systemd[1]: var-lib-kubelet-pods-f7607543\x2d368e\x2d4809\x2d997d\x2d75a32727f91e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Aug 13 00:05:40.800596 systemd[1]: var-lib-kubelet-pods-f7607543\x2d368e\x2d4809\x2d997d\x2d75a32727f91e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw5sln.mount: Deactivated successfully. Aug 13 00:05:40.800674 systemd[1]: var-lib-kubelet-pods-f7607543\x2d368e\x2d4809\x2d997d\x2d75a32727f91e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:05:41.812797 sshd[4209]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:41.816017 systemd-logind[1562]: Session 24 logged out. Waiting for processes to exit. Aug 13 00:05:41.816209 systemd[1]: sshd@21-10.200.20.38:22-10.200.16.10:38006.service: Deactivated successfully. Aug 13 00:05:41.817001 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 00:05:41.817421 systemd-logind[1562]: Removed session 24. Aug 13 00:05:41.889549 systemd[1]: Started sshd@22-10.200.20.38:22-10.200.16.10:50800.service. Aug 13 00:05:42.248226 kubelet[2662]: I0813 00:05:42.248194 2662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="724fbe26-40f5-4a64-8cdb-b7ada888e4cf" path="/var/lib/kubelet/pods/724fbe26-40f5-4a64-8cdb-b7ada888e4cf/volumes" Aug 13 00:05:42.249036 kubelet[2662]: I0813 00:05:42.249017 2662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7607543-368e-4809-997d-75a32727f91e" path="/var/lib/kubelet/pods/f7607543-368e-4809-997d-75a32727f91e/volumes" Aug 13 00:05:42.362091 sshd[4383]: Accepted publickey for core from 10.200.16.10 port 50800 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:05:42.363412 sshd[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:05:42.367817 systemd[1]: Started session-25.scope. Aug 13 00:05:42.368770 systemd-logind[1562]: New session 25 of user core. 
Aug 13 00:05:43.648100 kubelet[2662]: I0813 00:05:43.648041 2662 setters.go:600] "Node became not ready" node="ci-3510.3.8-a-af9fafecff" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T00:05:43Z","lastTransitionTime":"2025-08-13T00:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 13 00:05:45.250817 kubelet[2662]: E0813 00:05:45.250771 2662 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f7607543-368e-4809-997d-75a32727f91e" containerName="mount-cgroup" Aug 13 00:05:45.251228 kubelet[2662]: E0813 00:05:45.251213 2662 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f7607543-368e-4809-997d-75a32727f91e" containerName="apply-sysctl-overwrites" Aug 13 00:05:45.251312 kubelet[2662]: E0813 00:05:45.251303 2662 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="724fbe26-40f5-4a64-8cdb-b7ada888e4cf" containerName="cilium-operator" Aug 13 00:05:45.251373 kubelet[2662]: E0813 00:05:45.251358 2662 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f7607543-368e-4809-997d-75a32727f91e" containerName="clean-cilium-state" Aug 13 00:05:45.251428 kubelet[2662]: E0813 00:05:45.251419 2662 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f7607543-368e-4809-997d-75a32727f91e" containerName="cilium-agent" Aug 13 00:05:45.251475 kubelet[2662]: E0813 00:05:45.251466 2662 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f7607543-368e-4809-997d-75a32727f91e" containerName="mount-bpf-fs" Aug 13 00:05:45.251660 kubelet[2662]: I0813 00:05:45.251646 2662 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7607543-368e-4809-997d-75a32727f91e" containerName="cilium-agent" Aug 13 00:05:45.251742 kubelet[2662]: I0813 00:05:45.251732 2662 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="724fbe26-40f5-4a64-8cdb-b7ada888e4cf" containerName="cilium-operator" Aug 13 00:05:45.324996 sshd[4383]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:45.327325 systemd[1]: sshd@22-10.200.20.38:22-10.200.16.10:50800.service: Deactivated successfully. Aug 13 00:05:45.328318 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 00:05:45.328615 systemd-logind[1562]: Session 25 logged out. Waiting for processes to exit. Aug 13 00:05:45.329374 systemd-logind[1562]: Removed session 25. Aug 13 00:05:45.342501 kubelet[2662]: E0813 00:05:45.342471 2662 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 00:05:45.361983 kubelet[2662]: I0813 00:05:45.361955 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-cni-path\") pod \"cilium-2jdll\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " pod="kube-system/cilium-2jdll" Aug 13 00:05:45.362155 kubelet[2662]: I0813 00:05:45.362139 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-lib-modules\") pod \"cilium-2jdll\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " pod="kube-system/cilium-2jdll" Aug 13 00:05:45.362240 kubelet[2662]: I0813 00:05:45.362225 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-xtables-lock\") pod \"cilium-2jdll\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " pod="kube-system/cilium-2jdll" Aug 13 00:05:45.362322 kubelet[2662]: I0813 00:05:45.362310 2662 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-cilium-config-path\") pod \"cilium-2jdll\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " pod="kube-system/cilium-2jdll" Aug 13 00:05:45.362405 kubelet[2662]: I0813 00:05:45.362392 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-host-proc-sys-net\") pod \"cilium-2jdll\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " pod="kube-system/cilium-2jdll" Aug 13 00:05:45.362478 kubelet[2662]: I0813 00:05:45.362467 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-hubble-tls\") pod \"cilium-2jdll\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " pod="kube-system/cilium-2jdll" Aug 13 00:05:45.362557 kubelet[2662]: I0813 00:05:45.362545 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-bpf-maps\") pod \"cilium-2jdll\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " pod="kube-system/cilium-2jdll" Aug 13 00:05:45.362630 kubelet[2662]: I0813 00:05:45.362617 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-hostproc\") pod \"cilium-2jdll\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " pod="kube-system/cilium-2jdll" Aug 13 00:05:45.362722 kubelet[2662]: I0813 00:05:45.362709 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-etc-cni-netd\") pod \"cilium-2jdll\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " pod="kube-system/cilium-2jdll" Aug 13 00:05:45.362809 kubelet[2662]: I0813 00:05:45.362793 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6rfd\" (UniqueName: \"kubernetes.io/projected/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-kube-api-access-j6rfd\") pod \"cilium-2jdll\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " pod="kube-system/cilium-2jdll" Aug 13 00:05:45.362910 kubelet[2662]: I0813 00:05:45.362897 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-cilium-ipsec-secrets\") pod \"cilium-2jdll\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " pod="kube-system/cilium-2jdll" Aug 13 00:05:45.363014 kubelet[2662]: I0813 00:05:45.362998 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-clustermesh-secrets\") pod \"cilium-2jdll\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " pod="kube-system/cilium-2jdll" Aug 13 00:05:45.363100 kubelet[2662]: I0813 00:05:45.363087 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-cilium-run\") pod \"cilium-2jdll\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " pod="kube-system/cilium-2jdll" Aug 13 00:05:45.363175 kubelet[2662]: I0813 00:05:45.363163 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-host-proc-sys-kernel\") pod 
\"cilium-2jdll\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " pod="kube-system/cilium-2jdll" Aug 13 00:05:45.363295 kubelet[2662]: I0813 00:05:45.363260 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-cilium-cgroup\") pod \"cilium-2jdll\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " pod="kube-system/cilium-2jdll" Aug 13 00:05:45.401614 systemd[1]: Started sshd@23-10.200.20.38:22-10.200.16.10:50802.service. Aug 13 00:05:45.560524 env[1581]: time="2025-08-13T00:05:45.560079349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2jdll,Uid:3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838,Namespace:kube-system,Attempt:0,}" Aug 13 00:05:45.587417 env[1581]: time="2025-08-13T00:05:45.587336598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:05:45.587572 env[1581]: time="2025-08-13T00:05:45.587389358Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:05:45.587572 env[1581]: time="2025-08-13T00:05:45.587415078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:05:45.587740 env[1581]: time="2025-08-13T00:05:45.587703398Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe1ce9b4f8f3b57898a305f411dc4093b86767db4cca211f628a257fb2ff3bb4 pid=4409 runtime=io.containerd.runc.v2 Aug 13 00:05:45.620943 env[1581]: time="2025-08-13T00:05:45.620855338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2jdll,Uid:3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe1ce9b4f8f3b57898a305f411dc4093b86767db4cca211f628a257fb2ff3bb4\"" Aug 13 00:05:45.624975 env[1581]: time="2025-08-13T00:05:45.624942065Z" level=info msg="CreateContainer within sandbox \"fe1ce9b4f8f3b57898a305f411dc4093b86767db4cca211f628a257fb2ff3bb4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:05:45.649823 env[1581]: time="2025-08-13T00:05:45.649785269Z" level=info msg="CreateContainer within sandbox \"fe1ce9b4f8f3b57898a305f411dc4093b86767db4cca211f628a257fb2ff3bb4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"29683aeb3ae63ceee2ddddb5c1a13209e83627d6c1d896b26dc56d05d5a1296a\"" Aug 13 00:05:45.651049 env[1581]: time="2025-08-13T00:05:45.651023352Z" level=info msg="StartContainer for \"29683aeb3ae63ceee2ddddb5c1a13209e83627d6c1d896b26dc56d05d5a1296a\"" Aug 13 00:05:45.696549 env[1581]: time="2025-08-13T00:05:45.696510593Z" level=info msg="StartContainer for \"29683aeb3ae63ceee2ddddb5c1a13209e83627d6c1d896b26dc56d05d5a1296a\" returns successfully" Aug 13 00:05:45.754102 env[1581]: time="2025-08-13T00:05:45.754051936Z" level=info msg="shim disconnected" id=29683aeb3ae63ceee2ddddb5c1a13209e83627d6c1d896b26dc56d05d5a1296a Aug 13 00:05:45.754102 env[1581]: time="2025-08-13T00:05:45.754100536Z" level=warning msg="cleaning up after shim disconnected" id=29683aeb3ae63ceee2ddddb5c1a13209e83627d6c1d896b26dc56d05d5a1296a 
namespace=k8s.io Aug 13 00:05:45.754102 env[1581]: time="2025-08-13T00:05:45.754110696Z" level=info msg="cleaning up dead shim" Aug 13 00:05:45.760559 env[1581]: time="2025-08-13T00:05:45.760515908Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:05:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4491 runtime=io.containerd.runc.v2\n" Aug 13 00:05:45.874860 sshd[4394]: Accepted publickey for core from 10.200.16.10 port 50802 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:05:45.876257 sshd[4394]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:05:45.880799 systemd[1]: Started session-26.scope. Aug 13 00:05:45.881678 systemd-logind[1562]: New session 26 of user core. Aug 13 00:05:46.299100 sshd[4394]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:46.301929 systemd-logind[1562]: Session 26 logged out. Waiting for processes to exit. Aug 13 00:05:46.302069 systemd[1]: sshd@23-10.200.20.38:22-10.200.16.10:50802.service: Deactivated successfully. Aug 13 00:05:46.302862 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 00:05:46.303297 systemd-logind[1562]: Removed session 26. Aug 13 00:05:46.375659 systemd[1]: Started sshd@24-10.200.20.38:22-10.200.16.10:50812.service. Aug 13 00:05:46.611498 env[1581]: time="2025-08-13T00:05:46.611464861Z" level=info msg="StopPodSandbox for \"fe1ce9b4f8f3b57898a305f411dc4093b86767db4cca211f628a257fb2ff3bb4\"" Aug 13 00:05:46.611933 env[1581]: time="2025-08-13T00:05:46.611909622Z" level=info msg="Container to stop \"29683aeb3ae63ceee2ddddb5c1a13209e83627d6c1d896b26dc56d05d5a1296a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:05:46.615467 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fe1ce9b4f8f3b57898a305f411dc4093b86767db4cca211f628a257fb2ff3bb4-shm.mount: Deactivated successfully. 
Aug 13 00:05:46.648546 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe1ce9b4f8f3b57898a305f411dc4093b86767db4cca211f628a257fb2ff3bb4-rootfs.mount: Deactivated successfully. Aug 13 00:05:46.660831 env[1581]: time="2025-08-13T00:05:46.660786466Z" level=info msg="shim disconnected" id=fe1ce9b4f8f3b57898a305f411dc4093b86767db4cca211f628a257fb2ff3bb4 Aug 13 00:05:46.661047 env[1581]: time="2025-08-13T00:05:46.661028666Z" level=warning msg="cleaning up after shim disconnected" id=fe1ce9b4f8f3b57898a305f411dc4093b86767db4cca211f628a257fb2ff3bb4 namespace=k8s.io Aug 13 00:05:46.661109 env[1581]: time="2025-08-13T00:05:46.661095346Z" level=info msg="cleaning up dead shim" Aug 13 00:05:46.667676 env[1581]: time="2025-08-13T00:05:46.667643197Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:05:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4537 runtime=io.containerd.runc.v2\n" Aug 13 00:05:46.668205 env[1581]: time="2025-08-13T00:05:46.668176798Z" level=info msg="TearDown network for sandbox \"fe1ce9b4f8f3b57898a305f411dc4093b86767db4cca211f628a257fb2ff3bb4\" successfully" Aug 13 00:05:46.668305 env[1581]: time="2025-08-13T00:05:46.668287798Z" level=info msg="StopPodSandbox for \"fe1ce9b4f8f3b57898a305f411dc4093b86767db4cca211f628a257fb2ff3bb4\" returns successfully" Aug 13 00:05:46.774580 kubelet[2662]: I0813 00:05:46.774050 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-cilium-config-path\") pod \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " Aug 13 00:05:46.774580 kubelet[2662]: I0813 00:05:46.774089 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-hubble-tls\") pod \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\" (UID: 
\"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " Aug 13 00:05:46.774580 kubelet[2662]: I0813 00:05:46.774118 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-hostproc\") pod \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " Aug 13 00:05:46.774580 kubelet[2662]: I0813 00:05:46.774134 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-cilium-cgroup\") pod \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " Aug 13 00:05:46.774580 kubelet[2662]: I0813 00:05:46.774150 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-cni-path\") pod \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " Aug 13 00:05:46.774580 kubelet[2662]: I0813 00:05:46.774165 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-host-proc-sys-kernel\") pod \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " Aug 13 00:05:46.775407 kubelet[2662]: I0813 00:05:46.774190 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-host-proc-sys-net\") pod \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " Aug 13 00:05:46.775407 kubelet[2662]: I0813 00:05:46.774205 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-bpf-maps\") pod \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " Aug 13 00:05:46.775407 kubelet[2662]: I0813 00:05:46.774218 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-etc-cni-netd\") pod \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " Aug 13 00:05:46.775407 kubelet[2662]: I0813 00:05:46.774235 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6rfd\" (UniqueName: \"kubernetes.io/projected/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-kube-api-access-j6rfd\") pod \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " Aug 13 00:05:46.775407 kubelet[2662]: I0813 00:05:46.774252 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-cilium-ipsec-secrets\") pod \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " Aug 13 00:05:46.775407 kubelet[2662]: I0813 00:05:46.774279 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-clustermesh-secrets\") pod \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " Aug 13 00:05:46.775545 kubelet[2662]: I0813 00:05:46.774296 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-xtables-lock\") pod \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " Aug 13 00:05:46.775545 kubelet[2662]: I0813 
00:05:46.774311 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-lib-modules\") pod \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " Aug 13 00:05:46.775545 kubelet[2662]: I0813 00:05:46.774324 2662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-cilium-run\") pod \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\" (UID: \"3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838\") " Aug 13 00:05:46.775545 kubelet[2662]: I0813 00:05:46.774380 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838" (UID: "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:05:46.775545 kubelet[2662]: I0813 00:05:46.774445 2662 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-host-proc-sys-net\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\"" Aug 13 00:05:46.775545 kubelet[2662]: I0813 00:05:46.774466 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838" (UID: "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:05:46.775676 kubelet[2662]: I0813 00:05:46.774481 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838" (UID: "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:05:46.775676 kubelet[2662]: I0813 00:05:46.774964 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838" (UID: "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:05:46.776133 kubelet[2662]: I0813 00:05:46.776096 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-hostproc" (OuterVolumeSpecName: "hostproc") pod "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838" (UID: "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:05:46.776211 kubelet[2662]: I0813 00:05:46.776135 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838" (UID: "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:05:46.776211 kubelet[2662]: I0813 00:05:46.776151 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-cni-path" (OuterVolumeSpecName: "cni-path") pod "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838" (UID: "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:05:46.776211 kubelet[2662]: I0813 00:05:46.776164 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838" (UID: "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:05:46.778056 kubelet[2662]: I0813 00:05:46.778022 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838" (UID: "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 00:05:46.778194 kubelet[2662]: I0813 00:05:46.778170 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838" (UID: "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:05:46.778248 kubelet[2662]: I0813 00:05:46.778196 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838" (UID: "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 00:05:46.780282 systemd[1]: var-lib-kubelet-pods-3c66a1f8\x2d3e5b\x2d4bfd\x2db2c0\x2dba5dd42e7838-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:05:46.783000 kubelet[2662]: I0813 00:05:46.782975 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838" (UID: "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:05:46.784586 systemd[1]: var-lib-kubelet-pods-3c66a1f8\x2d3e5b\x2d4bfd\x2db2c0\x2dba5dd42e7838-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj6rfd.mount: Deactivated successfully. Aug 13 00:05:46.787163 kubelet[2662]: I0813 00:05:46.785964 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-kube-api-access-j6rfd" (OuterVolumeSpecName: "kube-api-access-j6rfd") pod "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838" (UID: "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838"). InnerVolumeSpecName "kube-api-access-j6rfd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:05:46.787163 kubelet[2662]: I0813 00:05:46.786504 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838" (UID: "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:05:46.788286 kubelet[2662]: I0813 00:05:46.788253 2662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838" (UID: "3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:05:46.851144 sshd[4514]: Accepted publickey for core from 10.200.16.10 port 50812 ssh2: RSA SHA256:J/je3NSfm2Jr+TQ4JtJfZPKSEEtI0uL9aC1/9TbPR4M Aug 13 00:05:46.851905 sshd[4514]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:05:46.855937 systemd-logind[1562]: New session 27 of user core. Aug 13 00:05:46.856201 systemd[1]: Started session-27.scope. 
Aug 13 00:05:46.875914 kubelet[2662]: I0813 00:05:46.874737 2662 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-cilium-config-path\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\"" Aug 13 00:05:46.875914 kubelet[2662]: I0813 00:05:46.874817 2662 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-hubble-tls\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\"" Aug 13 00:05:46.875914 kubelet[2662]: I0813 00:05:46.874835 2662 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-hostproc\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\"" Aug 13 00:05:46.875914 kubelet[2662]: I0813 00:05:46.874850 2662 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-cilium-cgroup\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\"" Aug 13 00:05:46.875914 kubelet[2662]: I0813 00:05:46.874864 2662 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-cni-path\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\"" Aug 13 00:05:46.875914 kubelet[2662]: I0813 00:05:46.874886 2662 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-host-proc-sys-kernel\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\"" Aug 13 00:05:46.875914 kubelet[2662]: I0813 00:05:46.874897 2662 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-clustermesh-secrets\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\"" Aug 
13 00:05:46.875914 kubelet[2662]: I0813 00:05:46.874906 2662 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-bpf-maps\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\"" Aug 13 00:05:46.876187 kubelet[2662]: I0813 00:05:46.874915 2662 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-etc-cni-netd\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\"" Aug 13 00:05:46.876187 kubelet[2662]: I0813 00:05:46.874928 2662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6rfd\" (UniqueName: \"kubernetes.io/projected/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-kube-api-access-j6rfd\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\"" Aug 13 00:05:46.876187 kubelet[2662]: I0813 00:05:46.874936 2662 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-cilium-ipsec-secrets\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\"" Aug 13 00:05:46.876187 kubelet[2662]: I0813 00:05:46.874944 2662 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-xtables-lock\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\"" Aug 13 00:05:46.876187 kubelet[2662]: I0813 00:05:46.874953 2662 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-lib-modules\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\"" Aug 13 00:05:46.876187 kubelet[2662]: I0813 00:05:46.874961 2662 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838-cilium-run\") on node \"ci-3510.3.8-a-af9fafecff\" DevicePath \"\"" Aug 13 
00:05:47.471120 systemd[1]: var-lib-kubelet-pods-3c66a1f8\x2d3e5b\x2d4bfd\x2db2c0\x2dba5dd42e7838-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 00:05:47.471266 systemd[1]: var-lib-kubelet-pods-3c66a1f8\x2d3e5b\x2d4bfd\x2db2c0\x2dba5dd42e7838-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Aug 13 00:05:47.614362 kubelet[2662]: I0813 00:05:47.614339 2662 scope.go:117] "RemoveContainer" containerID="29683aeb3ae63ceee2ddddb5c1a13209e83627d6c1d896b26dc56d05d5a1296a" Aug 13 00:05:47.616211 env[1581]: time="2025-08-13T00:05:47.615909410Z" level=info msg="RemoveContainer for \"29683aeb3ae63ceee2ddddb5c1a13209e83627d6c1d896b26dc56d05d5a1296a\"" Aug 13 00:05:47.623251 env[1581]: time="2025-08-13T00:05:47.623165822Z" level=info msg="RemoveContainer for \"29683aeb3ae63ceee2ddddb5c1a13209e83627d6c1d896b26dc56d05d5a1296a\" returns successfully" Aug 13 00:05:47.679367 kubelet[2662]: E0813 00:05:47.679314 2662 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838" containerName="mount-cgroup" Aug 13 00:05:47.679517 kubelet[2662]: I0813 00:05:47.679378 2662 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838" containerName="mount-cgroup" Aug 13 00:05:47.780296 kubelet[2662]: I0813 00:05:47.780190 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c5179d08-df4d-465b-bed2-13cad97e0362-cilium-config-path\") pod \"cilium-q74pz\" (UID: \"c5179d08-df4d-465b-bed2-13cad97e0362\") " pod="kube-system/cilium-q74pz" Aug 13 00:05:47.780687 kubelet[2662]: I0813 00:05:47.780667 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c5179d08-df4d-465b-bed2-13cad97e0362-hubble-tls\") pod 
\"cilium-q74pz\" (UID: \"c5179d08-df4d-465b-bed2-13cad97e0362\") " pod="kube-system/cilium-q74pz" Aug 13 00:05:47.780783 kubelet[2662]: I0813 00:05:47.780770 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c5179d08-df4d-465b-bed2-13cad97e0362-host-proc-sys-kernel\") pod \"cilium-q74pz\" (UID: \"c5179d08-df4d-465b-bed2-13cad97e0362\") " pod="kube-system/cilium-q74pz" Aug 13 00:05:47.780860 kubelet[2662]: I0813 00:05:47.780848 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5179d08-df4d-465b-bed2-13cad97e0362-xtables-lock\") pod \"cilium-q74pz\" (UID: \"c5179d08-df4d-465b-bed2-13cad97e0362\") " pod="kube-system/cilium-q74pz" Aug 13 00:05:47.780991 kubelet[2662]: I0813 00:05:47.780976 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c5179d08-df4d-465b-bed2-13cad97e0362-cni-path\") pod \"cilium-q74pz\" (UID: \"c5179d08-df4d-465b-bed2-13cad97e0362\") " pod="kube-system/cilium-q74pz" Aug 13 00:05:47.781074 kubelet[2662]: I0813 00:05:47.781062 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5179d08-df4d-465b-bed2-13cad97e0362-lib-modules\") pod \"cilium-q74pz\" (UID: \"c5179d08-df4d-465b-bed2-13cad97e0362\") " pod="kube-system/cilium-q74pz" Aug 13 00:05:47.781151 kubelet[2662]: I0813 00:05:47.781139 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c5179d08-df4d-465b-bed2-13cad97e0362-clustermesh-secrets\") pod \"cilium-q74pz\" (UID: \"c5179d08-df4d-465b-bed2-13cad97e0362\") " pod="kube-system/cilium-q74pz" Aug 13 00:05:47.781232 
kubelet[2662]: I0813 00:05:47.781218 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c5179d08-df4d-465b-bed2-13cad97e0362-cilium-ipsec-secrets\") pod \"cilium-q74pz\" (UID: \"c5179d08-df4d-465b-bed2-13cad97e0362\") " pod="kube-system/cilium-q74pz" Aug 13 00:05:47.781310 kubelet[2662]: I0813 00:05:47.781298 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bzfh\" (UniqueName: \"kubernetes.io/projected/c5179d08-df4d-465b-bed2-13cad97e0362-kube-api-access-8bzfh\") pod \"cilium-q74pz\" (UID: \"c5179d08-df4d-465b-bed2-13cad97e0362\") " pod="kube-system/cilium-q74pz" Aug 13 00:05:47.781389 kubelet[2662]: I0813 00:05:47.781378 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c5179d08-df4d-465b-bed2-13cad97e0362-bpf-maps\") pod \"cilium-q74pz\" (UID: \"c5179d08-df4d-465b-bed2-13cad97e0362\") " pod="kube-system/cilium-q74pz" Aug 13 00:05:47.781469 kubelet[2662]: I0813 00:05:47.781457 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c5179d08-df4d-465b-bed2-13cad97e0362-host-proc-sys-net\") pod \"cilium-q74pz\" (UID: \"c5179d08-df4d-465b-bed2-13cad97e0362\") " pod="kube-system/cilium-q74pz" Aug 13 00:05:47.781548 kubelet[2662]: I0813 00:05:47.781536 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c5179d08-df4d-465b-bed2-13cad97e0362-hostproc\") pod \"cilium-q74pz\" (UID: \"c5179d08-df4d-465b-bed2-13cad97e0362\") " pod="kube-system/cilium-q74pz" Aug 13 00:05:47.781626 kubelet[2662]: I0813 00:05:47.781614 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c5179d08-df4d-465b-bed2-13cad97e0362-cilium-cgroup\") pod \"cilium-q74pz\" (UID: \"c5179d08-df4d-465b-bed2-13cad97e0362\") " pod="kube-system/cilium-q74pz" Aug 13 00:05:47.781701 kubelet[2662]: I0813 00:05:47.781689 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c5179d08-df4d-465b-bed2-13cad97e0362-cilium-run\") pod \"cilium-q74pz\" (UID: \"c5179d08-df4d-465b-bed2-13cad97e0362\") " pod="kube-system/cilium-q74pz" Aug 13 00:05:47.781783 kubelet[2662]: I0813 00:05:47.781771 2662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c5179d08-df4d-465b-bed2-13cad97e0362-etc-cni-netd\") pod \"cilium-q74pz\" (UID: \"c5179d08-df4d-465b-bed2-13cad97e0362\") " pod="kube-system/cilium-q74pz" Aug 13 00:05:47.984435 env[1581]: time="2025-08-13T00:05:47.984049490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q74pz,Uid:c5179d08-df4d-465b-bed2-13cad97e0362,Namespace:kube-system,Attempt:0,}" Aug 13 00:05:48.004931 env[1581]: time="2025-08-13T00:05:48.004828604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:05:48.004931 env[1581]: time="2025-08-13T00:05:48.004884084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:05:48.004931 env[1581]: time="2025-08-13T00:05:48.004894924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:05:48.005440 env[1581]: time="2025-08-13T00:05:48.005286524Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c97cc344e009c6f0dc004c47ecb8131ba03502b8d9f96b286d20eddcf60a60a7 pid=4573 runtime=io.containerd.runc.v2 Aug 13 00:05:48.038867 env[1581]: time="2025-08-13T00:05:48.038476136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q74pz,Uid:c5179d08-df4d-465b-bed2-13cad97e0362,Namespace:kube-system,Attempt:0,} returns sandbox id \"c97cc344e009c6f0dc004c47ecb8131ba03502b8d9f96b286d20eddcf60a60a7\"" Aug 13 00:05:48.041818 env[1581]: time="2025-08-13T00:05:48.041686461Z" level=info msg="CreateContainer within sandbox \"c97cc344e009c6f0dc004c47ecb8131ba03502b8d9f96b286d20eddcf60a60a7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:05:48.064994 env[1581]: time="2025-08-13T00:05:48.064941217Z" level=info msg="CreateContainer within sandbox \"c97cc344e009c6f0dc004c47ecb8131ba03502b8d9f96b286d20eddcf60a60a7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2a27e79654f289d2998a632e9232758d65cf64d512e4bbb3387c4a4c36079d45\"" Aug 13 00:05:48.066241 env[1581]: time="2025-08-13T00:05:48.066029859Z" level=info msg="StartContainer for \"2a27e79654f289d2998a632e9232758d65cf64d512e4bbb3387c4a4c36079d45\"" Aug 13 00:05:48.111315 env[1581]: time="2025-08-13T00:05:48.111270249Z" level=info msg="StartContainer for \"2a27e79654f289d2998a632e9232758d65cf64d512e4bbb3387c4a4c36079d45\" returns successfully" Aug 13 00:05:48.147958 env[1581]: time="2025-08-13T00:05:48.147913586Z" level=info msg="shim disconnected" id=2a27e79654f289d2998a632e9232758d65cf64d512e4bbb3387c4a4c36079d45 Aug 13 00:05:48.148226 env[1581]: time="2025-08-13T00:05:48.148197426Z" level=warning msg="cleaning up after shim disconnected" id=2a27e79654f289d2998a632e9232758d65cf64d512e4bbb3387c4a4c36079d45 
namespace=k8s.io Aug 13 00:05:48.148306 env[1581]: time="2025-08-13T00:05:48.148292267Z" level=info msg="cleaning up dead shim" Aug 13 00:05:48.155102 env[1581]: time="2025-08-13T00:05:48.155072317Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:05:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4654 runtime=io.containerd.runc.v2\n" Aug 13 00:05:48.248826 kubelet[2662]: I0813 00:05:48.248794 2662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838" path="/var/lib/kubelet/pods/3c66a1f8-3e5b-4bfd-b2c0-ba5dd42e7838/volumes" Aug 13 00:05:48.621322 env[1581]: time="2025-08-13T00:05:48.621282081Z" level=info msg="CreateContainer within sandbox \"c97cc344e009c6f0dc004c47ecb8131ba03502b8d9f96b286d20eddcf60a60a7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:05:48.650906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount766512297.mount: Deactivated successfully. Aug 13 00:05:48.661974 env[1581]: time="2025-08-13T00:05:48.661928544Z" level=info msg="CreateContainer within sandbox \"c97cc344e009c6f0dc004c47ecb8131ba03502b8d9f96b286d20eddcf60a60a7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"61ddfd1f58abe07bc0c17b4609d73c06d9bf30d27b809ae92338d4bfc7378fb6\"" Aug 13 00:05:48.663847 env[1581]: time="2025-08-13T00:05:48.662411265Z" level=info msg="StartContainer for \"61ddfd1f58abe07bc0c17b4609d73c06d9bf30d27b809ae92338d4bfc7378fb6\"" Aug 13 00:05:48.709230 env[1581]: time="2025-08-13T00:05:48.709194538Z" level=info msg="StartContainer for \"61ddfd1f58abe07bc0c17b4609d73c06d9bf30d27b809ae92338d4bfc7378fb6\" returns successfully" Aug 13 00:05:48.735551 env[1581]: time="2025-08-13T00:05:48.735502499Z" level=info msg="shim disconnected" id=61ddfd1f58abe07bc0c17b4609d73c06d9bf30d27b809ae92338d4bfc7378fb6 Aug 13 00:05:48.735551 env[1581]: time="2025-08-13T00:05:48.735546859Z" level=warning msg="cleaning up 
after shim disconnected" id=61ddfd1f58abe07bc0c17b4609d73c06d9bf30d27b809ae92338d4bfc7378fb6 namespace=k8s.io Aug 13 00:05:48.735742 env[1581]: time="2025-08-13T00:05:48.735558379Z" level=info msg="cleaning up dead shim" Aug 13 00:05:48.742283 env[1581]: time="2025-08-13T00:05:48.742244589Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:05:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4718 runtime=io.containerd.runc.v2\n" Aug 13 00:05:49.471300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61ddfd1f58abe07bc0c17b4609d73c06d9bf30d27b809ae92338d4bfc7378fb6-rootfs.mount: Deactivated successfully. Aug 13 00:05:49.638288 env[1581]: time="2025-08-13T00:05:49.636094209Z" level=info msg="CreateContainer within sandbox \"c97cc344e009c6f0dc004c47ecb8131ba03502b8d9f96b286d20eddcf60a60a7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:05:49.662627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2471358681.mount: Deactivated successfully. 
Aug 13 00:05:49.672497 env[1581]: time="2025-08-13T00:05:49.672424263Z" level=info msg="CreateContainer within sandbox \"c97cc344e009c6f0dc004c47ecb8131ba03502b8d9f96b286d20eddcf60a60a7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ecd9a99635351b8fcc1189c9c0be19534ead24dff426326f05ce120cdb05adc4\"" Aug 13 00:05:49.673138 env[1581]: time="2025-08-13T00:05:49.673106824Z" level=info msg="StartContainer for \"ecd9a99635351b8fcc1189c9c0be19534ead24dff426326f05ce120cdb05adc4\"" Aug 13 00:05:49.722466 env[1581]: time="2025-08-13T00:05:49.722108296Z" level=info msg="StartContainer for \"ecd9a99635351b8fcc1189c9c0be19534ead24dff426326f05ce120cdb05adc4\" returns successfully" Aug 13 00:05:49.752140 env[1581]: time="2025-08-13T00:05:49.752089581Z" level=info msg="shim disconnected" id=ecd9a99635351b8fcc1189c9c0be19534ead24dff426326f05ce120cdb05adc4 Aug 13 00:05:49.752140 env[1581]: time="2025-08-13T00:05:49.752133421Z" level=warning msg="cleaning up after shim disconnected" id=ecd9a99635351b8fcc1189c9c0be19534ead24dff426326f05ce120cdb05adc4 namespace=k8s.io Aug 13 00:05:49.752140 env[1581]: time="2025-08-13T00:05:49.752142941Z" level=info msg="cleaning up dead shim" Aug 13 00:05:49.759570 env[1581]: time="2025-08-13T00:05:49.759528071Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:05:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4776 runtime=io.containerd.runc.v2\n" Aug 13 00:05:50.344016 kubelet[2662]: E0813 00:05:50.343959 2662 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 00:05:50.633481 env[1581]: time="2025-08-13T00:05:50.633436955Z" level=info msg="CreateContainer within sandbox \"c97cc344e009c6f0dc004c47ecb8131ba03502b8d9f96b286d20eddcf60a60a7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:05:50.662574 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2733280250.mount: Deactivated successfully.
Aug 13 00:05:50.670090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount913980405.mount: Deactivated successfully.
Aug 13 00:05:50.679810 env[1581]: time="2025-08-13T00:05:50.679771180Z" level=info msg="CreateContainer within sandbox \"c97cc344e009c6f0dc004c47ecb8131ba03502b8d9f96b286d20eddcf60a60a7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1421642d0aa701f2a13e3fd7df300bdcdda7b8fbfaacc32ac836b67d5e31d79f\""
Aug 13 00:05:50.681451 env[1581]: time="2025-08-13T00:05:50.681425942Z" level=info msg="StartContainer for \"1421642d0aa701f2a13e3fd7df300bdcdda7b8fbfaacc32ac836b67d5e31d79f\""
Aug 13 00:05:50.728364 env[1581]: time="2025-08-13T00:05:50.728313288Z" level=info msg="StartContainer for \"1421642d0aa701f2a13e3fd7df300bdcdda7b8fbfaacc32ac836b67d5e31d79f\" returns successfully"
Aug 13 00:05:50.756599 env[1581]: time="2025-08-13T00:05:50.756548728Z" level=info msg="shim disconnected" id=1421642d0aa701f2a13e3fd7df300bdcdda7b8fbfaacc32ac836b67d5e31d79f
Aug 13 00:05:50.756599 env[1581]: time="2025-08-13T00:05:50.756596528Z" level=warning msg="cleaning up after shim disconnected" id=1421642d0aa701f2a13e3fd7df300bdcdda7b8fbfaacc32ac836b67d5e31d79f namespace=k8s.io
Aug 13 00:05:50.756806 env[1581]: time="2025-08-13T00:05:50.756605808Z" level=info msg="cleaning up dead shim"
Aug 13 00:05:50.763262 env[1581]: time="2025-08-13T00:05:50.763225817Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:05:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4832 runtime=io.containerd.runc.v2\n"
Aug 13 00:05:51.634518 env[1581]: time="2025-08-13T00:05:51.634478712Z" level=info msg="CreateContainer within sandbox \"c97cc344e009c6f0dc004c47ecb8131ba03502b8d9f96b286d20eddcf60a60a7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 00:05:51.658396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3924995874.mount: Deactivated successfully.
Aug 13 00:05:51.664077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1689626366.mount: Deactivated successfully.
Aug 13 00:05:51.671636 env[1581]: time="2025-08-13T00:05:51.671595641Z" level=info msg="CreateContainer within sandbox \"c97cc344e009c6f0dc004c47ecb8131ba03502b8d9f96b286d20eddcf60a60a7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6d9ed464afe9d87b816e4a60414b22173724a51e80f0887d5ff752350db3085d\""
Aug 13 00:05:51.672550 env[1581]: time="2025-08-13T00:05:51.672515763Z" level=info msg="StartContainer for \"6d9ed464afe9d87b816e4a60414b22173724a51e80f0887d5ff752350db3085d\""
Aug 13 00:05:51.730228 env[1581]: time="2025-08-13T00:05:51.730000079Z" level=info msg="StartContainer for \"6d9ed464afe9d87b816e4a60414b22173724a51e80f0887d5ff752350db3085d\" returns successfully"
Aug 13 00:05:52.216897 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Aug 13 00:05:53.308175 systemd[1]: run-containerd-runc-k8s.io-6d9ed464afe9d87b816e4a60414b22173724a51e80f0887d5ff752350db3085d-runc.aeBRnq.mount: Deactivated successfully.
Aug 13 00:05:54.800168 systemd-networkd[1771]: lxc_health: Link UP
Aug 13 00:05:54.857915 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Aug 13 00:05:54.857701 systemd-networkd[1771]: lxc_health: Gained carrier
Aug 13 00:05:56.005200 kubelet[2662]: I0813 00:05:56.005149 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q74pz" podStartSLOduration=9.005134801 podStartE2EDuration="9.005134801s" podCreationTimestamp="2025-08-13 00:05:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:05:52.653663379 +0000 UTC m=+172.557690970" watchObservedRunningTime="2025-08-13 00:05:56.005134801 +0000 UTC m=+175.909162392"
Aug 13 00:05:56.459096 systemd-networkd[1771]: lxc_health: Gained IPv6LL
Aug 13 00:05:57.616825 systemd[1]: run-containerd-runc-k8s.io-6d9ed464afe9d87b816e4a60414b22173724a51e80f0887d5ff752350db3085d-runc.EHSTDb.mount: Deactivated successfully.
Aug 13 00:06:00.232199 env[1581]: time="2025-08-13T00:06:00.232163309Z" level=info msg="StopPodSandbox for \"02ddcb7a190243947389ad26948f292405dc4244ab9054d40b23d154a87115c2\""
Aug 13 00:06:00.232671 env[1581]: time="2025-08-13T00:06:00.232623630Z" level=info msg="TearDown network for sandbox \"02ddcb7a190243947389ad26948f292405dc4244ab9054d40b23d154a87115c2\" successfully"
Aug 13 00:06:00.232742 env[1581]: time="2025-08-13T00:06:00.232726590Z" level=info msg="StopPodSandbox for \"02ddcb7a190243947389ad26948f292405dc4244ab9054d40b23d154a87115c2\" returns successfully"
Aug 13 00:06:00.233275 env[1581]: time="2025-08-13T00:06:00.233243590Z" level=info msg="RemovePodSandbox for \"02ddcb7a190243947389ad26948f292405dc4244ab9054d40b23d154a87115c2\""
Aug 13 00:06:00.233413 env[1581]: time="2025-08-13T00:06:00.233278470Z" level=info msg="Forcibly stopping sandbox \"02ddcb7a190243947389ad26948f292405dc4244ab9054d40b23d154a87115c2\""
Aug 13 00:06:00.233413 env[1581]: time="2025-08-13T00:06:00.233341190Z" level=info msg="TearDown network for sandbox \"02ddcb7a190243947389ad26948f292405dc4244ab9054d40b23d154a87115c2\" successfully"
Aug 13 00:06:00.243067 env[1581]: time="2025-08-13T00:06:00.243027117Z" level=info msg="RemovePodSandbox \"02ddcb7a190243947389ad26948f292405dc4244ab9054d40b23d154a87115c2\" returns successfully"
Aug 13 00:06:00.243509 env[1581]: time="2025-08-13T00:06:00.243487797Z" level=info msg="StopPodSandbox for \"dd4c7dcb94ea4b60d3e1a1885be8be5ac1379c28fed610c23ece2d3a10da5e38\""
Aug 13 00:06:00.243668 env[1581]: time="2025-08-13T00:06:00.243631997Z" level=info msg="TearDown network for sandbox \"dd4c7dcb94ea4b60d3e1a1885be8be5ac1379c28fed610c23ece2d3a10da5e38\" successfully"
Aug 13 00:06:00.243738 env[1581]: time="2025-08-13T00:06:00.243719038Z" level=info msg="StopPodSandbox for \"dd4c7dcb94ea4b60d3e1a1885be8be5ac1379c28fed610c23ece2d3a10da5e38\" returns successfully"
Aug 13 00:06:00.244057 env[1581]: time="2025-08-13T00:06:00.244031118Z" level=info msg="RemovePodSandbox for \"dd4c7dcb94ea4b60d3e1a1885be8be5ac1379c28fed610c23ece2d3a10da5e38\""
Aug 13 00:06:00.244179 env[1581]: time="2025-08-13T00:06:00.244148838Z" level=info msg="Forcibly stopping sandbox \"dd4c7dcb94ea4b60d3e1a1885be8be5ac1379c28fed610c23ece2d3a10da5e38\""
Aug 13 00:06:00.244280 env[1581]: time="2025-08-13T00:06:00.244263718Z" level=info msg="TearDown network for sandbox \"dd4c7dcb94ea4b60d3e1a1885be8be5ac1379c28fed610c23ece2d3a10da5e38\" successfully"
Aug 13 00:06:00.250018 env[1581]: time="2025-08-13T00:06:00.249965122Z" level=info msg="RemovePodSandbox \"dd4c7dcb94ea4b60d3e1a1885be8be5ac1379c28fed610c23ece2d3a10da5e38\" returns successfully"
Aug 13 00:06:00.250584 env[1581]: time="2025-08-13T00:06:00.250359402Z" level=info msg="StopPodSandbox for \"fe1ce9b4f8f3b57898a305f411dc4093b86767db4cca211f628a257fb2ff3bb4\""
Aug 13 00:06:00.250584 env[1581]: time="2025-08-13T00:06:00.250443322Z" level=info msg="TearDown network for sandbox \"fe1ce9b4f8f3b57898a305f411dc4093b86767db4cca211f628a257fb2ff3bb4\" successfully"
Aug 13 00:06:00.250584 env[1581]: time="2025-08-13T00:06:00.250470482Z" level=info msg="StopPodSandbox for \"fe1ce9b4f8f3b57898a305f411dc4093b86767db4cca211f628a257fb2ff3bb4\" returns successfully"
Aug 13 00:06:00.250935 env[1581]: time="2025-08-13T00:06:00.250746483Z" level=info msg="RemovePodSandbox for \"fe1ce9b4f8f3b57898a305f411dc4093b86767db4cca211f628a257fb2ff3bb4\""
Aug 13 00:06:00.250935 env[1581]: time="2025-08-13T00:06:00.250775963Z" level=info msg="Forcibly stopping sandbox \"fe1ce9b4f8f3b57898a305f411dc4093b86767db4cca211f628a257fb2ff3bb4\""
Aug 13 00:06:00.250935 env[1581]: time="2025-08-13T00:06:00.250846443Z" level=info msg="TearDown network for sandbox \"fe1ce9b4f8f3b57898a305f411dc4093b86767db4cca211f628a257fb2ff3bb4\" successfully"
Aug 13 00:06:00.258124 env[1581]: time="2025-08-13T00:06:00.258095408Z" level=info msg="RemovePodSandbox \"fe1ce9b4f8f3b57898a305f411dc4093b86767db4cca211f628a257fb2ff3bb4\" returns successfully"
Aug 13 00:06:01.861954 systemd[1]: run-containerd-runc-k8s.io-6d9ed464afe9d87b816e4a60414b22173724a51e80f0887d5ff752350db3085d-runc.D6ncIN.mount: Deactivated successfully.
Aug 13 00:06:02.044099 sshd[4514]: pam_unix(sshd:session): session closed for user core
Aug 13 00:06:02.047192 systemd[1]: sshd@24-10.200.20.38:22-10.200.16.10:50812.service: Deactivated successfully.
Aug 13 00:06:02.047933 systemd[1]: session-27.scope: Deactivated successfully.
Aug 13 00:06:02.048845 systemd-logind[1562]: Session 27 logged out. Waiting for processes to exit.
Aug 13 00:06:02.049932 systemd-logind[1562]: Removed session 27.