May 17 00:48:03.006322 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 17 00:48:03.006340 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri May 16 23:24:21 -00 2025
May 17 00:48:03.006348 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
May 17 00:48:03.006354 kernel: printk: bootconsole [pl11] enabled
May 17 00:48:03.006359 kernel: efi: EFI v2.70 by EDK II
May 17 00:48:03.006365 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8b98 RNG=0x3fd5f998 MEMRESERVE=0x3763cf98
May 17 00:48:03.006371 kernel: random: crng init done
May 17 00:48:03.006377 kernel: ACPI: Early table checksum verification disabled
May 17 00:48:03.006382 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
May 17 00:48:03.006387 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:48:03.006393 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:48:03.006398 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
May 17 00:48:03.006405 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:48:03.006410 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:48:03.006418 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:48:03.006423 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:48:03.006429 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:48:03.006436 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:48:03.006442 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
May 17 00:48:03.006448 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:48:03.006453 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
May 17 00:48:03.006459 kernel: NUMA: Failed to initialise from firmware
May 17 00:48:03.006465 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
May 17 00:48:03.006471 kernel: NUMA: NODE_DATA [mem 0x1bf7f3900-0x1bf7f8fff]
May 17 00:48:03.006476 kernel: Zone ranges:
May 17 00:48:03.006482 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
May 17 00:48:03.006487 kernel: DMA32 empty
May 17 00:48:03.006493 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
May 17 00:48:03.006500 kernel: Movable zone start for each node
May 17 00:48:03.006505 kernel: Early memory node ranges
May 17 00:48:03.006511 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
May 17 00:48:03.006517 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
May 17 00:48:03.006523 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
May 17 00:48:03.006528 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
May 17 00:48:03.006534 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
May 17 00:48:03.006539 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
May 17 00:48:03.006545 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
May 17 00:48:03.006551 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
May 17 00:48:03.006556 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
May 17 00:48:03.006562 kernel: psci: probing for conduit method from ACPI.
May 17 00:48:03.006572 kernel: psci: PSCIv1.1 detected in firmware.
May 17 00:48:03.006578 kernel: psci: Using standard PSCI v0.2 function IDs
May 17 00:48:03.006584 kernel: psci: MIGRATE_INFO_TYPE not supported.
May 17 00:48:03.006590 kernel: psci: SMC Calling Convention v1.4
May 17 00:48:03.006596 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
May 17 00:48:03.006603 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
May 17 00:48:03.006609 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
May 17 00:48:03.006626 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
May 17 00:48:03.006632 kernel: pcpu-alloc: [0] 0 [0] 1
May 17 00:48:03.006638 kernel: Detected PIPT I-cache on CPU0
May 17 00:48:03.006644 kernel: CPU features: detected: GIC system register CPU interface
May 17 00:48:03.006650 kernel: CPU features: detected: Hardware dirty bit management
May 17 00:48:03.006656 kernel: CPU features: detected: Spectre-BHB
May 17 00:48:03.006662 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 17 00:48:03.006668 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 17 00:48:03.006675 kernel: CPU features: detected: ARM erratum 1418040
May 17 00:48:03.006682 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
May 17 00:48:03.006688 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 17 00:48:03.006694 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
May 17 00:48:03.006700 kernel: Policy zone: Normal
May 17 00:48:03.006707 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2d48a3f066dcb37cd386b93b4921577cdf70daa76e7b097cf98da108968f8bb5
May 17 00:48:03.006714 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:48:03.006720 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 17 00:48:03.006726 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:48:03.006732 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:48:03.006738 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB)
May 17 00:48:03.006745 kernel: Memory: 3986944K/4194160K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 207216K reserved, 0K cma-reserved)
May 17 00:48:03.006752 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 17 00:48:03.006759 kernel: trace event string verifier disabled
May 17 00:48:03.006765 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 00:48:03.006771 kernel: rcu: RCU event tracing is enabled.
May 17 00:48:03.006777 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 17 00:48:03.006784 kernel: Trampoline variant of Tasks RCU enabled.
May 17 00:48:03.006790 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:48:03.006796 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:48:03.006802 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 17 00:48:03.006808 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 17 00:48:03.006814 kernel: GICv3: 960 SPIs implemented
May 17 00:48:03.006822 kernel: GICv3: 0 Extended SPIs implemented
May 17 00:48:03.006828 kernel: GICv3: Distributor has no Range Selector support
May 17 00:48:03.006834 kernel: Root IRQ handler: gic_handle_irq
May 17 00:48:03.006840 kernel: GICv3: 16 PPIs implemented
May 17 00:48:03.006846 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
May 17 00:48:03.006852 kernel: ITS: No ITS available, not enabling LPIs
May 17 00:48:03.006858 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:48:03.006864 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 17 00:48:03.006871 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 17 00:48:03.006877 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 17 00:48:03.006889 kernel: Console: colour dummy device 80x25
May 17 00:48:03.006897 kernel: printk: console [tty1] enabled
May 17 00:48:03.006903 kernel: ACPI: Core revision 20210730
May 17 00:48:03.006910 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 17 00:48:03.006917 kernel: pid_max: default: 32768 minimum: 301
May 17 00:48:03.006923 kernel: LSM: Security Framework initializing
May 17 00:48:03.006934 kernel: SELinux: Initializing.
May 17 00:48:03.006940 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:48:03.006946 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:48:03.006953 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
May 17 00:48:03.006960 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
May 17 00:48:03.006967 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:48:03.006973 kernel: Remapping and enabling EFI services.
May 17 00:48:03.006979 kernel: smp: Bringing up secondary CPUs ...
May 17 00:48:03.006985 kernel: Detected PIPT I-cache on CPU1
May 17 00:48:03.006991 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
May 17 00:48:03.006997 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:48:03.007004 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 17 00:48:03.007013 kernel: smp: Brought up 1 node, 2 CPUs
May 17 00:48:03.007020 kernel: SMP: Total of 2 processors activated.
May 17 00:48:03.007027 kernel: CPU features: detected: 32-bit EL0 Support
May 17 00:48:03.007034 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
May 17 00:48:03.007040 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 17 00:48:03.007046 kernel: CPU features: detected: CRC32 instructions
May 17 00:48:03.007052 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 17 00:48:03.007062 kernel: CPU features: detected: LSE atomic instructions
May 17 00:48:03.007068 kernel: CPU features: detected: Privileged Access Never
May 17 00:48:03.007075 kernel: CPU: All CPU(s) started at EL1
May 17 00:48:03.007081 kernel: alternatives: patching kernel code
May 17 00:48:03.007088 kernel: devtmpfs: initialized
May 17 00:48:03.007102 kernel: KASLR enabled
May 17 00:48:03.007109 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:48:03.007117 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 17 00:48:03.007124 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:48:03.007130 kernel: SMBIOS 3.1.0 present.
May 17 00:48:03.007141 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
May 17 00:48:03.007148 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:48:03.007154 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 17 00:48:03.007163 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 17 00:48:03.007173 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 17 00:48:03.007180 kernel: audit: initializing netlink subsys (disabled)
May 17 00:48:03.007186 kernel: audit: type=2000 audit(0.087:1): state=initialized audit_enabled=0 res=1
May 17 00:48:03.007193 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:48:03.007199 kernel: cpuidle: using governor menu
May 17 00:48:03.007206 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 17 00:48:03.007214 kernel: ASID allocator initialised with 32768 entries
May 17 00:48:03.007220 kernel: ACPI: bus type PCI registered
May 17 00:48:03.007227 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:48:03.007233 kernel: Serial: AMBA PL011 UART driver
May 17 00:48:03.007243 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:48:03.007250 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
May 17 00:48:03.007257 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:48:03.007264 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
May 17 00:48:03.007270 kernel: cryptd: max_cpu_qlen set to 1000
May 17 00:48:03.007278 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 17 00:48:03.007284 kernel: ACPI: Added _OSI(Module Device)
May 17 00:48:03.007291 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:48:03.007297 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:48:03.007304 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:48:03.007310 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 17 00:48:03.007317 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 17 00:48:03.007327 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 17 00:48:03.007334 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:48:03.007342 kernel: ACPI: Interpreter enabled
May 17 00:48:03.007349 kernel: ACPI: Using GIC for interrupt routing
May 17 00:48:03.007355 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
May 17 00:48:03.007362 kernel: printk: console [ttyAMA0] enabled
May 17 00:48:03.007369 kernel: printk: bootconsole [pl11] disabled
May 17 00:48:03.007375 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
May 17 00:48:03.007385 kernel: iommu: Default domain type: Translated
May 17 00:48:03.007392 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 17 00:48:03.007398 kernel: vgaarb: loaded
May 17 00:48:03.007405 kernel: pps_core: LinuxPPS API ver. 1 registered
May 17 00:48:03.007413 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 17 00:48:03.007419 kernel: PTP clock support registered
May 17 00:48:03.007426 kernel: Registered efivars operations
May 17 00:48:03.007432 kernel: No ACPI PMU IRQ for CPU0
May 17 00:48:03.007442 kernel: No ACPI PMU IRQ for CPU1
May 17 00:48:03.007448 kernel: clocksource: Switched to clocksource arch_sys_counter
May 17 00:48:03.007455 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:48:03.007461 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:48:03.007469 kernel: pnp: PnP ACPI init
May 17 00:48:03.007479 kernel: pnp: PnP ACPI: found 0 devices
May 17 00:48:03.007486 kernel: NET: Registered PF_INET protocol family
May 17 00:48:03.007492 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 00:48:03.007499 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 17 00:48:03.007509 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:48:03.007516 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 17 00:48:03.007522 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 17 00:48:03.007532 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 17 00:48:03.007540 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:48:03.007546 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:48:03.007553 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:48:03.007563 kernel: PCI: CLS 0 bytes, default 64
May 17 00:48:03.007570 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
May 17 00:48:03.007576 kernel: kvm [1]: HYP mode not available
May 17 00:48:03.007586 kernel: Initialise system trusted keyrings
May 17 00:48:03.007593 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 17 00:48:03.007599 kernel: Key type asymmetric registered
May 17 00:48:03.007607 kernel: Asymmetric key parser 'x509' registered
May 17 00:48:03.007623 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 17 00:48:03.007629 kernel: io scheduler mq-deadline registered
May 17 00:48:03.009643 kernel: io scheduler kyber registered
May 17 00:48:03.009653 kernel: io scheduler bfq registered
May 17 00:48:03.009660 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:48:03.009666 kernel: thunder_xcv, ver 1.0
May 17 00:48:03.009673 kernel: thunder_bgx, ver 1.0
May 17 00:48:03.009679 kernel: nicpf, ver 1.0
May 17 00:48:03.009686 kernel: nicvf, ver 1.0
May 17 00:48:03.009813 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 17 00:48:03.009873 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-17T00:48:02 UTC (1747442882)
May 17 00:48:03.009883 kernel: efifb: probing for efifb
May 17 00:48:03.009890 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
May 17 00:48:03.009897 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
May 17 00:48:03.009903 kernel: efifb: scrolling: redraw
May 17 00:48:03.009910 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 17 00:48:03.009918 kernel: Console: switching to colour frame buffer device 128x48
May 17 00:48:03.009925 kernel: fb0: EFI VGA frame buffer device
May 17 00:48:03.009932 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
May 17 00:48:03.009938 kernel: hid: raw HID events driver (C) Jiri Kosina
May 17 00:48:03.009945 kernel: NET: Registered PF_INET6 protocol family
May 17 00:48:03.009951 kernel: Segment Routing with IPv6
May 17 00:48:03.009958 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:48:03.009964 kernel: NET: Registered PF_PACKET protocol family
May 17 00:48:03.009970 kernel: Key type dns_resolver registered
May 17 00:48:03.009977 kernel: registered taskstats version 1
May 17 00:48:03.009985 kernel: Loading compiled-in X.509 certificates
May 17 00:48:03.009992 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 2fa973ae674d09a62938b8c6a2b9446b5340adb7'
May 17 00:48:03.009999 kernel: Key type .fscrypt registered
May 17 00:48:03.010005 kernel: Key type fscrypt-provisioning registered
May 17 00:48:03.010012 kernel: ima: No TPM chip found, activating TPM-bypass!
May 17 00:48:03.010019 kernel: ima: Allocated hash algorithm: sha1
May 17 00:48:03.010025 kernel: ima: No architecture policies found
May 17 00:48:03.010032 kernel: clk: Disabling unused clocks
May 17 00:48:03.010040 kernel: Freeing unused kernel memory: 36416K
May 17 00:48:03.010047 kernel: Run /init as init process
May 17 00:48:03.010053 kernel: with arguments:
May 17 00:48:03.010059 kernel: /init
May 17 00:48:03.010066 kernel: with environment:
May 17 00:48:03.010073 kernel: HOME=/
May 17 00:48:03.010079 kernel: TERM=linux
May 17 00:48:03.010085 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:48:03.010094 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 17 00:48:03.010104 systemd[1]: Detected virtualization microsoft.
May 17 00:48:03.010112 systemd[1]: Detected architecture arm64.
May 17 00:48:03.010119 systemd[1]: Running in initrd.
May 17 00:48:03.010126 systemd[1]: No hostname configured, using default hostname.
May 17 00:48:03.010133 systemd[1]: Hostname set to .
May 17 00:48:03.010140 systemd[1]: Initializing machine ID from random generator.
May 17 00:48:03.010147 systemd[1]: Queued start job for default target initrd.target.
May 17 00:48:03.010155 systemd[1]: Started systemd-ask-password-console.path.
May 17 00:48:03.010162 systemd[1]: Reached target cryptsetup.target.
May 17 00:48:03.010169 systemd[1]: Reached target paths.target.
May 17 00:48:03.010176 systemd[1]: Reached target slices.target.
May 17 00:48:03.010183 systemd[1]: Reached target swap.target.
May 17 00:48:03.010190 systemd[1]: Reached target timers.target.
May 17 00:48:03.010197 systemd[1]: Listening on iscsid.socket.
May 17 00:48:03.010204 systemd[1]: Listening on iscsiuio.socket.
May 17 00:48:03.010212 systemd[1]: Listening on systemd-journald-audit.socket.
May 17 00:48:03.010219 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 17 00:48:03.010226 systemd[1]: Listening on systemd-journald.socket.
May 17 00:48:03.010233 systemd[1]: Listening on systemd-networkd.socket.
May 17 00:48:03.010240 systemd[1]: Listening on systemd-udevd-control.socket.
May 17 00:48:03.010247 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 17 00:48:03.010254 systemd[1]: Reached target sockets.target.
May 17 00:48:03.010261 systemd[1]: Starting kmod-static-nodes.service...
May 17 00:48:03.010268 systemd[1]: Finished network-cleanup.service.
May 17 00:48:03.010276 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:48:03.010283 systemd[1]: Starting systemd-journald.service...
May 17 00:48:03.010290 systemd[1]: Starting systemd-modules-load.service...
May 17 00:48:03.010297 systemd[1]: Starting systemd-resolved.service...
May 17 00:48:03.010304 systemd[1]: Starting systemd-vconsole-setup.service...
May 17 00:48:03.010314 systemd-journald[276]: Journal started
May 17 00:48:03.010351 systemd-journald[276]: Runtime Journal (/run/log/journal/e6726798d87f4c8c8ef51550b894ae59) is 8.0M, max 78.5M, 70.5M free.
May 17 00:48:02.989581 systemd-modules-load[277]: Inserted module 'overlay'
May 17 00:48:03.029559 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:48:03.037887 systemd-resolved[278]: Positive Trust Anchors:
May 17 00:48:03.074308 systemd[1]: Started systemd-journald.service.
May 17 00:48:03.074329 kernel: audit: type=1130 audit(1747442883.043:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:03.074340 kernel: Bridge firewalling registered
May 17 00:48:03.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:03.038043 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:48:03.134017 kernel: audit: type=1130 audit(1747442883.069:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:03.134059 kernel: SCSI subsystem initialized
May 17 00:48:03.134076 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:48:03.134092 kernel: audit: type=1130 audit(1747442883.099:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:03.134101 kernel: device-mapper: uevent: version 1.0.3
May 17 00:48:03.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:03.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:03.038071 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 17 00:48:03.231738 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 17 00:48:03.231759 kernel: audit: type=1130 audit(1747442883.133:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:03.231770 kernel: audit: type=1130 audit(1747442883.173:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:03.231787 kernel: audit: type=1130 audit(1747442883.213:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:03.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:03.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:03.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:03.044208 systemd-resolved[278]: Defaulting to hostname 'linux'.
May 17 00:48:03.059239 systemd[1]: Started systemd-resolved.service.
May 17 00:48:03.062345 systemd-modules-load[277]: Inserted module 'br_netfilter'
May 17 00:48:03.070116 systemd[1]: Finished kmod-static-nodes.service.
May 17 00:48:03.099898 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:48:03.134735 systemd[1]: Finished systemd-vconsole-setup.service.
May 17 00:48:03.158743 systemd-modules-load[277]: Inserted module 'dm_multipath'
May 17 00:48:03.173934 systemd[1]: Finished systemd-modules-load.service.
May 17 00:48:03.214303 systemd[1]: Reached target nss-lookup.target.
May 17 00:48:03.236954 systemd[1]: Starting dracut-cmdline-ask.service...
May 17 00:48:03.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:03.245332 systemd[1]: Starting systemd-sysctl.service...
May 17 00:48:03.351218 kernel: audit: type=1130 audit(1747442883.304:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:03.351241 kernel: audit: type=1130 audit(1747442883.327:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:03.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:03.273872 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 17 00:48:03.292876 systemd[1]: Finished dracut-cmdline-ask.service.
May 17 00:48:03.379654 kernel: audit: type=1130 audit(1747442883.351:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:03.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:03.304691 systemd[1]: Finished systemd-sysctl.service.
May 17 00:48:03.327748 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 17 00:48:03.352261 systemd[1]: Starting dracut-cmdline.service...
May 17 00:48:03.395865 dracut-cmdline[299]: dracut-dracut-053
May 17 00:48:03.401125 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=2d48a3f066dcb37cd386b93b4921577cdf70daa76e7b097cf98da108968f8bb5
May 17 00:48:03.488638 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:48:03.503661 kernel: iscsi: registered transport (tcp)
May 17 00:48:03.525134 kernel: iscsi: registered transport (qla4xxx)
May 17 00:48:03.525189 kernel: QLogic iSCSI HBA Driver
May 17 00:48:03.560283 systemd[1]: Finished dracut-cmdline.service.
May 17 00:48:03.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:03.570285 systemd[1]: Starting dracut-pre-udev.service...
May 17 00:48:03.619638 kernel: raid6: neonx8 gen() 13819 MB/s
May 17 00:48:03.640636 kernel: raid6: neonx8 xor() 10831 MB/s
May 17 00:48:03.660626 kernel: raid6: neonx4 gen() 13557 MB/s
May 17 00:48:03.681649 kernel: raid6: neonx4 xor() 11331 MB/s
May 17 00:48:03.702627 kernel: raid6: neonx2 gen() 12931 MB/s
May 17 00:48:03.722632 kernel: raid6: neonx2 xor() 10259 MB/s
May 17 00:48:03.742624 kernel: raid6: neonx1 gen() 10535 MB/s
May 17 00:48:03.763630 kernel: raid6: neonx1 xor() 8803 MB/s
May 17 00:48:03.783628 kernel: raid6: int64x8 gen() 6269 MB/s
May 17 00:48:03.803624 kernel: raid6: int64x8 xor() 3547 MB/s
May 17 00:48:03.824629 kernel: raid6: int64x4 gen() 7211 MB/s
May 17 00:48:03.844629 kernel: raid6: int64x4 xor() 3848 MB/s
May 17 00:48:03.864624 kernel: raid6: int64x2 gen() 6150 MB/s
May 17 00:48:03.885625 kernel: raid6: int64x2 xor() 3322 MB/s
May 17 00:48:03.905624 kernel: raid6: int64x1 gen() 5049 MB/s
May 17 00:48:03.929842 kernel: raid6: int64x1 xor() 2647 MB/s
May 17 00:48:03.929860 kernel: raid6: using algorithm neonx8 gen() 13819 MB/s
May 17 00:48:03.929876 kernel: raid6: .... xor() 10831 MB/s, rmw enabled
May 17 00:48:03.934965 kernel: raid6: using neon recovery algorithm
May 17 00:48:03.951636 kernel: xor: measuring software checksum speed
May 17 00:48:03.959092 kernel: 8regs : 16335 MB/sec
May 17 00:48:03.959110 kernel: 32regs : 20327 MB/sec
May 17 00:48:03.962787 kernel: arm64_neon : 27860 MB/sec
May 17 00:48:03.962796 kernel: xor: using function: arm64_neon (27860 MB/sec)
May 17 00:48:04.022639 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
May 17 00:48:04.033013 systemd[1]: Finished dracut-pre-udev.service.
May 17 00:48:04.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:04.040000 audit: BPF prog-id=7 op=LOAD
May 17 00:48:04.040000 audit: BPF prog-id=8 op=LOAD
May 17 00:48:04.041887 systemd[1]: Starting systemd-udevd.service...
May 17 00:48:04.059981 systemd-udevd[476]: Using default interface naming scheme 'v252'.
May 17 00:48:04.065743 systemd[1]: Started systemd-udevd.service.
May 17 00:48:04.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:04.075826 systemd[1]: Starting dracut-pre-trigger.service...
May 17 00:48:04.089786 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation
May 17 00:48:04.118009 systemd[1]: Finished dracut-pre-trigger.service.
May 17 00:48:04.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:04.123555 systemd[1]: Starting systemd-udev-trigger.service...
May 17 00:48:04.158420 systemd[1]: Finished systemd-udev-trigger.service.
May 17 00:48:04.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:04.201642 kernel: hv_vmbus: Vmbus version:5.3 May 17 00:48:04.209641 kernel: hv_vmbus: registering driver hid_hyperv May 17 00:48:04.236373 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 May 17 00:48:04.236415 kernel: hv_vmbus: registering driver hyperv_keyboard May 17 00:48:04.236426 kernel: hv_vmbus: registering driver hv_netvsc May 17 00:48:04.236434 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on May 17 00:48:04.239627 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 May 17 00:48:04.264059 kernel: hv_vmbus: registering driver hv_storvsc May 17 00:48:04.264094 kernel: scsi host1: storvsc_host_t May 17 00:48:04.270132 kernel: scsi host0: storvsc_host_t May 17 00:48:04.276639 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 May 17 00:48:04.283481 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 May 17 00:48:04.307465 kernel: sr 0:0:0:2: [sr0] scsi-1 drive May 17 00:48:04.322891 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 17 00:48:04.322905 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) May 17 00:48:04.332990 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks May 17 00:48:04.333088 kernel: sd 0:0:0:0: [sda] Write Protect is off May 17 00:48:04.333167 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 May 17 00:48:04.333244 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA May 17 00:48:04.333321 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 May 17 00:48:04.333406 kernel: sda: sda1 sda2 sda3 
sda4 sda6 sda7 sda9 May 17 00:48:04.333416 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 17 00:48:04.333491 kernel: hv_netvsc 000d3afc-96de-000d-3afc-96de000d3afc eth0: VF slot 1 added May 17 00:48:04.348649 kernel: hv_vmbus: registering driver hv_pci May 17 00:48:04.363324 kernel: hv_pci fb640ebb-edf1-44ba-83c7-106446770fcf: PCI VMBus probing: Using version 0x10004 May 17 00:48:04.460751 kernel: hv_pci fb640ebb-edf1-44ba-83c7-106446770fcf: PCI host bridge to bus edf1:00 May 17 00:48:04.460853 kernel: pci_bus edf1:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] May 17 00:48:04.460950 kernel: pci_bus edf1:00: No busn resource found for root bus, will use [bus 00-ff] May 17 00:48:04.461024 kernel: pci edf1:00:02.0: [15b3:1018] type 00 class 0x020000 May 17 00:48:04.461128 kernel: pci edf1:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] May 17 00:48:04.461211 kernel: pci edf1:00:02.0: enabling Extended Tags May 17 00:48:04.461294 kernel: pci edf1:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at edf1:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) May 17 00:48:04.461371 kernel: pci_bus edf1:00: busn_res: [bus 00-ff] end is updated to 00 May 17 00:48:04.461444 kernel: pci edf1:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] May 17 00:48:04.498638 kernel: mlx5_core edf1:00:02.0: firmware version: 16.30.1284 May 17 00:48:04.721096 kernel: mlx5_core edf1:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) May 17 00:48:04.721215 kernel: hv_netvsc 000d3afc-96de-000d-3afc-96de000d3afc eth0: VF registering: eth1 May 17 00:48:04.721298 kernel: mlx5_core edf1:00:02.0 eth1: joined to eth0 May 17 00:48:04.729631 kernel: mlx5_core edf1:00:02.0 enP60913s1: renamed from eth1 May 17 00:48:04.779633 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (532) May 17 00:48:04.791908 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
May 17 00:48:04.806872 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 17 00:48:04.953657 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 17 00:48:05.008981 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 17 00:48:05.014776 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 17 00:48:05.026762 systemd[1]: Starting disk-uuid.service... May 17 00:48:05.050666 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:48:05.058654 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:48:06.065634 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:48:06.067806 disk-uuid[601]: The operation has completed successfully. May 17 00:48:06.121138 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:48:06.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:06.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:06.121236 systemd[1]: Finished disk-uuid.service. May 17 00:48:06.130831 systemd[1]: Starting verity-setup.service... May 17 00:48:06.174032 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 17 00:48:06.371951 systemd[1]: Found device dev-mapper-usr.device. May 17 00:48:06.378391 systemd[1]: Mounting sysusr-usr.mount... May 17 00:48:06.391160 systemd[1]: Finished verity-setup.service. May 17 00:48:06.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:06.447636 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. 
Quota mode: none. May 17 00:48:06.447763 systemd[1]: Mounted sysusr-usr.mount. May 17 00:48:06.451910 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 17 00:48:06.452643 systemd[1]: Starting ignition-setup.service... May 17 00:48:06.459700 systemd[1]: Starting parse-ip-for-networkd.service... May 17 00:48:06.497300 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 17 00:48:06.497335 kernel: BTRFS info (device sda6): using free space tree May 17 00:48:06.503830 kernel: BTRFS info (device sda6): has skinny extents May 17 00:48:06.554455 systemd[1]: Finished parse-ip-for-networkd.service. May 17 00:48:06.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:06.563000 audit: BPF prog-id=9 op=LOAD May 17 00:48:06.564336 systemd[1]: Starting systemd-networkd.service... May 17 00:48:06.589229 systemd-networkd[844]: lo: Link UP May 17 00:48:06.589244 systemd-networkd[844]: lo: Gained carrier May 17 00:48:06.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:06.589657 systemd-networkd[844]: Enumeration completed May 17 00:48:06.592751 systemd[1]: Started systemd-networkd.service. May 17 00:48:06.593278 systemd-networkd[844]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:48:06.598008 systemd[1]: Reached target network.target. May 17 00:48:06.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:06.608398 systemd[1]: Starting iscsiuio.service... 
May 17 00:48:06.621489 systemd[1]: mnt-oem.mount: Deactivated successfully. May 17 00:48:06.649124 iscsid[854]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 17 00:48:06.649124 iscsid[854]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log May 17 00:48:06.649124 iscsid[854]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 17 00:48:06.649124 iscsid[854]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 17 00:48:06.649124 iscsid[854]: If using hardware iscsi like qla4xxx this message can be ignored. May 17 00:48:06.649124 iscsid[854]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 17 00:48:06.649124 iscsid[854]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 17 00:48:06.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:06.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:06.621860 systemd[1]: Started iscsiuio.service. May 17 00:48:06.633340 systemd[1]: Starting iscsid.service... May 17 00:48:06.654901 systemd[1]: Started iscsid.service. May 17 00:48:06.661609 systemd[1]: Starting dracut-initqueue.service... May 17 00:48:06.712239 systemd[1]: Finished dracut-initqueue.service. May 17 00:48:06.724265 systemd[1]: Reached target remote-fs-pre.target. 
May 17 00:48:06.735886 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:48:06.744807 systemd[1]: Reached target remote-fs.target. May 17 00:48:06.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:06.753828 systemd[1]: Starting dracut-pre-mount.service... May 17 00:48:06.779990 systemd[1]: Finished dracut-pre-mount.service. May 17 00:48:06.808402 kernel: mlx5_core edf1:00:02.0 enP60913s1: Link up May 17 00:48:06.814691 systemd[1]: Finished ignition-setup.service. May 17 00:48:06.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:06.820313 systemd[1]: Starting ignition-fetch-offline.service... May 17 00:48:06.846411 kernel: hv_netvsc 000d3afc-96de-000d-3afc-96de000d3afc eth0: Data path switched to VF: enP60913s1 May 17 00:48:06.847655 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:48:06.846860 systemd-networkd[844]: enP60913s1: Link UP May 17 00:48:06.846958 systemd-networkd[844]: eth0: Link UP May 17 00:48:06.847076 systemd-networkd[844]: eth0: Gained carrier May 17 00:48:06.855908 systemd-networkd[844]: enP60913s1: Gained carrier May 17 00:48:06.873679 systemd-networkd[844]: eth0: DHCPv4 address 10.200.20.19/24, gateway 10.200.20.1 acquired from 168.63.129.16 May 17 00:48:08.289734 systemd-networkd[844]: eth0: Gained IPv6LL May 17 00:48:09.749849 ignition[869]: Ignition 2.14.0 May 17 00:48:09.752962 ignition[869]: Stage: fetch-offline May 17 00:48:09.753043 ignition[869]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:48:09.753072 ignition[869]: parsing config with SHA512: 
4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:48:09.786483 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:48:09.786668 ignition[869]: parsed url from cmdline: "" May 17 00:48:09.786672 ignition[869]: no config URL provided May 17 00:48:09.786677 ignition[869]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:48:09.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:09.793862 systemd[1]: Finished ignition-fetch-offline.service. May 17 00:48:09.832878 kernel: kauditd_printk_skb: 18 callbacks suppressed May 17 00:48:09.832900 kernel: audit: type=1130 audit(1747442889.801:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:09.786686 ignition[869]: no config at "/usr/lib/ignition/user.ign" May 17 00:48:09.811746 systemd[1]: Starting ignition-fetch.service... 
May 17 00:48:09.786691 ignition[869]: failed to fetch config: resource requires networking May 17 00:48:09.786976 ignition[869]: Ignition finished successfully May 17 00:48:09.818428 ignition[875]: Ignition 2.14.0 May 17 00:48:09.818434 ignition[875]: Stage: fetch May 17 00:48:09.818531 ignition[875]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:48:09.818551 ignition[875]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:48:09.821037 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:48:09.821148 ignition[875]: parsed url from cmdline: "" May 17 00:48:09.821151 ignition[875]: no config URL provided May 17 00:48:09.821155 ignition[875]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:48:09.821165 ignition[875]: no config at "/usr/lib/ignition/user.ign" May 17 00:48:09.821195 ignition[875]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 May 17 00:48:09.927124 ignition[875]: GET result: OK May 17 00:48:09.927201 ignition[875]: config has been read from IMDS userdata May 17 00:48:09.930280 unknown[875]: fetched base config from "system" May 17 00:48:09.964669 kernel: audit: type=1130 audit(1747442889.942:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:09.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:48:09.927241 ignition[875]: parsing config with SHA512: 3f84b48ca5fe4a6fea4368ea3a20375b342df489466a7a2422b59b3e75b031f1e6983c4269e318f07bfb225179ec27f8c53029bae6335f0925bbabcfe7ca84c4 May 17 00:48:09.930287 unknown[875]: fetched base config from "system" May 17 00:48:09.930811 ignition[875]: fetch: fetch complete May 17 00:48:09.930292 unknown[875]: fetched user config from "azure" May 17 00:48:09.930816 ignition[875]: fetch: fetch passed May 17 00:48:09.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:09.999634 kernel: audit: type=1130 audit(1747442889.981:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:09.934929 systemd[1]: Finished ignition-fetch.service. May 17 00:48:09.930854 ignition[875]: Ignition finished successfully May 17 00:48:09.943351 systemd[1]: Starting ignition-kargs.service... May 17 00:48:09.970948 ignition[881]: Ignition 2.14.0 May 17 00:48:09.977233 systemd[1]: Finished ignition-kargs.service. May 17 00:48:09.970954 ignition[881]: Stage: kargs May 17 00:48:09.982181 systemd[1]: Starting ignition-disks.service... May 17 00:48:09.971051 ignition[881]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:48:10.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:10.027127 systemd[1]: Finished ignition-disks.service. May 17 00:48:10.063228 kernel: audit: type=1130 audit(1747442890.033:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:48:09.971071 ignition[881]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:48:10.033548 systemd[1]: Reached target initrd-root-device.target. May 17 00:48:09.973570 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:48:10.059218 systemd[1]: Reached target local-fs-pre.target. May 17 00:48:09.976199 ignition[881]: kargs: kargs passed May 17 00:48:10.067959 systemd[1]: Reached target local-fs.target. May 17 00:48:09.976234 ignition[881]: Ignition finished successfully May 17 00:48:10.076995 systemd[1]: Reached target sysinit.target. May 17 00:48:09.990145 ignition[887]: Ignition 2.14.0 May 17 00:48:10.087174 systemd[1]: Reached target basic.target. May 17 00:48:09.990150 ignition[887]: Stage: disks May 17 00:48:10.096520 systemd[1]: Starting systemd-fsck-root.service... May 17 00:48:09.990238 ignition[887]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:48:09.990255 ignition[887]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:48:09.992758 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:48:10.026216 ignition[887]: disks: disks passed May 17 00:48:10.026276 ignition[887]: Ignition finished successfully May 17 00:48:10.158640 systemd-fsck[895]: ROOT: clean, 619/7326000 files, 481078/7359488 blocks May 17 00:48:10.169950 systemd[1]: Finished systemd-fsck-root.service. May 17 00:48:10.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:10.179303 systemd[1]: Mounting sysroot.mount... 
May 17 00:48:10.203736 kernel: audit: type=1130 audit(1747442890.174:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:10.217710 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 17 00:48:10.214091 systemd[1]: Mounted sysroot.mount. May 17 00:48:10.218766 systemd[1]: Reached target initrd-root-fs.target. May 17 00:48:10.261688 systemd[1]: Mounting sysroot-usr.mount... May 17 00:48:10.266277 systemd[1]: Starting flatcar-metadata-hostname.service... May 17 00:48:10.273568 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:48:10.273597 systemd[1]: Reached target ignition-diskful.target. May 17 00:48:10.279463 systemd[1]: Mounted sysroot-usr.mount. May 17 00:48:10.334743 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 17 00:48:10.339930 systemd[1]: Starting initrd-setup-root.service... May 17 00:48:10.367422 initrd-setup-root[911]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:48:10.383801 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (906) May 17 00:48:10.383824 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 17 00:48:10.383833 kernel: BTRFS info (device sda6): using free space tree May 17 00:48:10.383842 kernel: BTRFS info (device sda6): has skinny extents May 17 00:48:10.385371 initrd-setup-root[935]: cut: /sysroot/etc/group: No such file or directory May 17 00:48:10.395707 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 17 00:48:10.408147 initrd-setup-root[945]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:48:10.416867 initrd-setup-root[953]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:48:10.943503 systemd[1]: Finished initrd-setup-root.service. 
May 17 00:48:10.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:10.967664 systemd[1]: Starting ignition-mount.service... May 17 00:48:10.978735 kernel: audit: type=1130 audit(1747442890.947:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:10.976745 systemd[1]: Starting sysroot-boot.service... May 17 00:48:10.983202 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. May 17 00:48:10.983314 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. May 17 00:48:11.008156 ignition[972]: INFO : Ignition 2.14.0 May 17 00:48:11.008156 ignition[972]: INFO : Stage: mount May 17 00:48:11.016160 ignition[972]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:48:11.016160 ignition[972]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:48:11.016160 ignition[972]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:48:11.016160 ignition[972]: INFO : mount: mount passed May 17 00:48:11.016160 ignition[972]: INFO : Ignition finished successfully May 17 00:48:11.097046 kernel: audit: type=1130 audit(1747442891.027:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:11.097071 kernel: audit: type=1130 audit(1747442891.065:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:48:11.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:11.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:11.016313 systemd[1]: Finished ignition-mount.service. May 17 00:48:11.044162 systemd[1]: Finished sysroot-boot.service. May 17 00:48:11.752113 coreos-metadata[905]: May 17 00:48:11.752 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 17 00:48:11.760587 coreos-metadata[905]: May 17 00:48:11.755 INFO Fetch successful May 17 00:48:11.788618 coreos-metadata[905]: May 17 00:48:11.788 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 May 17 00:48:11.801347 coreos-metadata[905]: May 17 00:48:11.801 INFO Fetch successful May 17 00:48:11.823303 coreos-metadata[905]: May 17 00:48:11.823 INFO wrote hostname ci-3510.3.7-n-5e40c0776b to /sysroot/etc/hostname May 17 00:48:11.831639 systemd[1]: Finished flatcar-metadata-hostname.service. May 17 00:48:11.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:11.838359 systemd[1]: Starting ignition-files.service... May 17 00:48:11.864516 kernel: audit: type=1130 audit(1747442891.837:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:11.863572 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
May 17 00:48:11.887460 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (984) May 17 00:48:11.887491 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 17 00:48:11.887501 kernel: BTRFS info (device sda6): using free space tree May 17 00:48:11.896749 kernel: BTRFS info (device sda6): has skinny extents May 17 00:48:11.901084 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 17 00:48:11.914797 ignition[1003]: INFO : Ignition 2.14.0 May 17 00:48:11.914797 ignition[1003]: INFO : Stage: files May 17 00:48:11.924305 ignition[1003]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:48:11.924305 ignition[1003]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:48:11.924305 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:48:11.924305 ignition[1003]: DEBUG : files: compiled without relabeling support, skipping May 17 00:48:11.924305 ignition[1003]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:48:11.924305 ignition[1003]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:48:11.976382 ignition[1003]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:48:11.983863 ignition[1003]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:48:11.992218 ignition[1003]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:48:11.992218 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" May 17 00:48:11.992218 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 
May 17 00:48:11.991014 unknown[1003]: wrote ssh authorized keys file for user: core May 17 00:48:12.079453 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 17 00:48:12.188388 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" May 17 00:48:12.198977 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 17 00:48:12.198977 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 17 00:48:12.682555 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 17 00:48:12.752227 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 17 00:48:12.762821 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 17 00:48:12.762821 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:48:12.762821 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:48:12.762821 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:48:12.762821 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:48:12.762821 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:48:12.762821 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: 
op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:48:12.762821 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:48:12.762821 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:48:12.762821 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:48:12.762821 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 17 00:48:12.762821 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 17 00:48:12.762821 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service" May 17 00:48:12.762821 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition May 17 00:48:12.762821 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4119873714" May 17 00:48:12.919574 ignition[1003]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4119873714": device or resource busy May 17 00:48:12.919574 ignition[1003]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4119873714", trying btrfs: device or resource busy May 17 00:48:12.919574 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): 
[started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4119873714" May 17 00:48:12.919574 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4119873714" May 17 00:48:12.919574 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem4119873714" May 17 00:48:12.919574 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem4119873714" May 17 00:48:12.919574 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" May 17 00:48:12.919574 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" May 17 00:48:12.919574 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition May 17 00:48:12.919574 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem533918511" May 17 00:48:12.919574 ignition[1003]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem533918511": device or resource busy May 17 00:48:12.919574 ignition[1003]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem533918511", trying btrfs: device or resource busy May 17 00:48:12.919574 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem533918511" May 17 00:48:12.919574 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem533918511" May 17 00:48:12.765245 systemd[1]: 
mnt-oem4119873714.mount: Deactivated successfully. May 17 00:48:13.081864 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem533918511" May 17 00:48:13.081864 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem533918511" May 17 00:48:13.081864 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" May 17 00:48:13.081864 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 17 00:48:13.081864 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 May 17 00:48:12.795690 systemd[1]: mnt-oem533918511.mount: Deactivated successfully. May 17 00:48:13.617334 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK May 17 00:48:13.850719 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 17 00:48:13.850719 ignition[1003]: INFO : files: op(14): [started] processing unit "nvidia.service" May 17 00:48:13.850719 ignition[1003]: INFO : files: op(14): [finished] processing unit "nvidia.service" May 17 00:48:13.850719 ignition[1003]: INFO : files: op(15): [started] processing unit "waagent.service" May 17 00:48:13.850719 ignition[1003]: INFO : files: op(15): [finished] processing unit "waagent.service" May 17 00:48:13.850719 ignition[1003]: INFO : files: op(16): [started] processing unit "prepare-helm.service" May 17 00:48:13.934351 kernel: audit: type=1130 audit(1747442893.874:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" 
hostname=? addr=? terminal=? res=success' May 17 00:48:13.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:13.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:13.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:13.934471 ignition[1003]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:48:13.934471 ignition[1003]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:48:13.934471 ignition[1003]: INFO : files: op(16): [finished] processing unit "prepare-helm.service" May 17 00:48:13.934471 ignition[1003]: INFO : files: op(18): [started] setting preset to enabled for "nvidia.service" May 17 00:48:13.934471 ignition[1003]: INFO : files: op(18): [finished] setting preset to enabled for "nvidia.service" May 17 00:48:13.934471 ignition[1003]: INFO : files: op(19): [started] setting preset to enabled for "waagent.service" May 17 00:48:13.934471 ignition[1003]: INFO : files: op(19): [finished] setting preset to enabled for "waagent.service" May 17 00:48:13.934471 ignition[1003]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service" May 17 00:48:13.934471 ignition[1003]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:48:13.934471 ignition[1003]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file 
"/sysroot/etc/.ignition-result.json" May 17 00:48:13.934471 ignition[1003]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:48:13.934471 ignition[1003]: INFO : files: files passed May 17 00:48:13.934471 ignition[1003]: INFO : Ignition finished successfully May 17 00:48:13.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:13.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:13.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:13.865702 systemd[1]: Finished ignition-files.service. May 17 00:48:13.875329 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 17 00:48:14.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.078509 initrd-setup-root-after-ignition[1028]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:48:13.899769 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 17 00:48:13.900484 systemd[1]: Starting ignition-quench.service... May 17 00:48:13.912669 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:48:13.912778 systemd[1]: Finished ignition-quench.service. May 17 00:48:13.929055 systemd[1]: Finished initrd-setup-root-after-ignition.service. 
May 17 00:48:13.934647 systemd[1]: Reached target ignition-complete.target. May 17 00:48:14.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:13.950270 systemd[1]: Starting initrd-parse-etc.service... May 17 00:48:13.980877 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:48:13.980980 systemd[1]: Finished initrd-parse-etc.service. May 17 00:48:13.991441 systemd[1]: Reached target initrd-fs.target. May 17 00:48:14.002830 systemd[1]: Reached target initrd.target. May 17 00:48:14.014179 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 17 00:48:14.014893 systemd[1]: Starting dracut-pre-pivot.service... May 17 00:48:14.065126 systemd[1]: Finished dracut-pre-pivot.service. May 17 00:48:14.074257 systemd[1]: Starting initrd-cleanup.service... May 17 00:48:14.106464 systemd[1]: Stopped target nss-lookup.target. May 17 00:48:14.114208 systemd[1]: Stopped target remote-cryptsetup.target. May 17 00:48:14.124111 systemd[1]: Stopped target timers.target. May 17 00:48:14.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.132060 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:48:14.132209 systemd[1]: Stopped dracut-pre-pivot.service. May 17 00:48:14.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.141678 systemd[1]: Stopped target initrd.target. 
May 17 00:48:14.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.150433 systemd[1]: Stopped target basic.target. May 17 00:48:14.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.158952 systemd[1]: Stopped target ignition-complete.target. May 17 00:48:14.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.168958 systemd[1]: Stopped target ignition-diskful.target. May 17 00:48:14.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.177635 systemd[1]: Stopped target initrd-root-device.target. May 17 00:48:14.186266 systemd[1]: Stopped target remote-fs.target. May 17 00:48:14.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:48:14.336840 ignition[1041]: INFO : Ignition 2.14.0 May 17 00:48:14.336840 ignition[1041]: INFO : Stage: umount May 17 00:48:14.336840 ignition[1041]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:48:14.336840 ignition[1041]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:48:14.336840 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:48:14.336840 ignition[1041]: INFO : umount: umount passed May 17 00:48:14.336840 ignition[1041]: INFO : Ignition finished successfully May 17 00:48:14.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:48:14.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.194249 systemd[1]: Stopped target remote-fs-pre.target. May 17 00:48:14.205449 systemd[1]: Stopped target sysinit.target. May 17 00:48:14.213782 systemd[1]: Stopped target local-fs.target. May 17 00:48:14.221754 systemd[1]: Stopped target local-fs-pre.target. May 17 00:48:14.230166 systemd[1]: Stopped target swap.target. May 17 00:48:14.237943 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:48:14.238107 systemd[1]: Stopped dracut-pre-mount.service. May 17 00:48:14.246747 systemd[1]: Stopped target cryptsetup.target. May 17 00:48:14.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.254420 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:48:14.254576 systemd[1]: Stopped dracut-initqueue.service. May 17 00:48:14.263691 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:48:14.263829 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 17 00:48:14.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.273156 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:48:14.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.521000 audit: BPF prog-id=6 op=UNLOAD May 17 00:48:14.273282 systemd[1]: Stopped ignition-files.service. 
May 17 00:48:14.281652 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 17 00:48:14.281784 systemd[1]: Stopped flatcar-metadata-hostname.service. May 17 00:48:14.291466 systemd[1]: Stopping ignition-mount.service... May 17 00:48:14.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.299934 systemd[1]: Stopping sysroot-boot.service... May 17 00:48:14.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.304020 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:48:14.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.304255 systemd[1]: Stopped systemd-udev-trigger.service. May 17 00:48:14.316762 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:48:14.317765 systemd[1]: Stopped dracut-pre-trigger.service. May 17 00:48:14.330515 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:48:14.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.330929 systemd[1]: Finished initrd-cleanup.service. May 17 00:48:14.346799 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:48:14.347229 systemd[1]: ignition-mount.service: Deactivated successfully. 
May 17 00:48:14.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.347303 systemd[1]: Stopped ignition-mount.service. May 17 00:48:14.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.355412 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:48:14.657954 kernel: hv_netvsc 000d3afc-96de-000d-3afc-96de000d3afc eth0: Data path switched from VF: enP60913s1 May 17 00:48:14.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.355462 systemd[1]: Stopped ignition-disks.service. May 17 00:48:14.365467 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:48:14.365504 systemd[1]: Stopped ignition-kargs.service. May 17 00:48:14.370418 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:48:14.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.370450 systemd[1]: Stopped ignition-fetch.service. May 17 00:48:14.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.387854 systemd[1]: Stopped target network.target. May 17 00:48:14.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 17 00:48:14.399048 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:48:14.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.399095 systemd[1]: Stopped ignition-fetch-offline.service. May 17 00:48:14.407634 systemd[1]: Stopped target paths.target. May 17 00:48:14.416684 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:48:14.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.424671 systemd[1]: Stopped systemd-ask-password-console.path. May 17 00:48:14.429633 systemd[1]: Stopped target slices.target. May 17 00:48:14.438064 systemd[1]: Stopped target sockets.target. May 17 00:48:14.446637 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:48:14.446668 systemd[1]: Closed iscsid.socket. May 17 00:48:14.453924 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:48:14.453943 systemd[1]: Closed iscsiuio.socket. May 17 00:48:14.463466 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:48:14.463510 systemd[1]: Stopped ignition-setup.service. May 17 00:48:14.472142 systemd[1]: Stopping systemd-networkd.service... May 17 00:48:14.480364 systemd[1]: Stopping systemd-resolved.service... 
May 17 00:48:14.489685 systemd-networkd[844]: eth0: DHCPv6 lease lost May 17 00:48:14.778000 audit: BPF prog-id=9 op=UNLOAD May 17 00:48:14.492869 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:48:14.492994 systemd[1]: Stopped systemd-networkd.service. May 17 00:48:14.506130 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:48:14.509704 systemd[1]: Stopped systemd-resolved.service. May 17 00:48:14.517389 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:48:14.517435 systemd[1]: Closed systemd-networkd.socket. May 17 00:48:14.530944 systemd[1]: Stopping network-cleanup.service... May 17 00:48:14.541729 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:48:14.541808 systemd[1]: Stopped parse-ip-for-networkd.service. May 17 00:48:14.551443 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:48:14.551491 systemd[1]: Stopped systemd-sysctl.service. May 17 00:48:14.563509 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:48:14.563577 systemd[1]: Stopped systemd-modules-load.service. May 17 00:48:14.568592 systemd[1]: Stopping systemd-udevd.service... May 17 00:48:14.578642 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 17 00:48:14.588182 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:48:14.588329 systemd[1]: Stopped systemd-udevd.service. May 17 00:48:14.597160 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:48:14.597200 systemd[1]: Closed systemd-udevd-control.socket. May 17 00:48:14.608477 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:48:14.608519 systemd[1]: Closed systemd-udevd-kernel.socket. May 17 00:48:14.617008 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:48:14.617055 systemd[1]: Stopped dracut-pre-udev.service. 
May 17 00:48:14.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.626749 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:48:14.947408 kernel: kauditd_printk_skb: 39 callbacks suppressed May 17 00:48:14.947431 kernel: audit: type=1131 audit(1747442894.892:78): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.947443 kernel: audit: type=1131 audit(1747442894.931:79): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:14.626793 systemd[1]: Stopped dracut-cmdline.service. May 17 00:48:14.644237 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:48:14.644285 systemd[1]: Stopped dracut-cmdline-ask.service. May 17 00:48:14.657458 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 17 00:48:14.671908 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:48:14.671979 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 17 00:48:14.685784 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:48:14.685861 systemd[1]: Stopped kmod-static-nodes.service. May 17 00:48:14.690395 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:48:14.690437 systemd[1]: Stopped systemd-vconsole-setup.service. 
May 17 00:48:14.700598 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 17 00:48:14.701076 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:48:15.013926 iscsid[854]: iscsid shutting down. May 17 00:48:14.701151 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 17 00:48:14.718759 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:48:14.718852 systemd[1]: Stopped network-cleanup.service. May 17 00:48:14.884121 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:48:14.884213 systemd[1]: Stopped sysroot-boot.service. May 17 00:48:14.904815 systemd[1]: Reached target initrd-switch-root.target. May 17 00:48:14.922258 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:48:14.922309 systemd[1]: Stopped initrd-setup-root.service. May 17 00:48:14.956871 systemd[1]: Starting initrd-switch-root.service... May 17 00:48:14.967848 systemd[1]: Switching root. May 17 00:48:15.014243 systemd-journald[276]: Journal stopped May 17 00:48:26.389832 systemd-journald[276]: Received SIGTERM from PID 1 (n/a). May 17 00:48:26.389853 kernel: SELinux: Class mctp_socket not defined in policy. May 17 00:48:26.389863 kernel: SELinux: Class anon_inode not defined in policy. 
May 17 00:48:26.389874 kernel: SELinux: the above unknown classes and permissions will be allowed May 17 00:48:26.389884 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:48:26.389892 kernel: SELinux: policy capability open_perms=1 May 17 00:48:26.389901 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:48:26.389909 kernel: SELinux: policy capability always_check_network=0 May 17 00:48:26.389917 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:48:26.389925 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:48:26.389934 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:48:26.389942 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:48:26.389950 kernel: audit: type=1403 audit(1747442897.454:80): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:48:26.389959 systemd[1]: Successfully loaded SELinux policy in 253.450ms. May 17 00:48:26.389970 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.743ms. May 17 00:48:26.389982 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:48:26.389991 systemd[1]: Detected virtualization microsoft. May 17 00:48:26.390000 systemd[1]: Detected architecture arm64. May 17 00:48:26.390008 systemd[1]: Detected first boot. May 17 00:48:26.390018 systemd[1]: Hostname set to . May 17 00:48:26.390027 systemd[1]: Initializing machine ID from random generator. 
May 17 00:48:26.390036 kernel: audit: type=1400 audit(1747442898.161:81): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:48:26.390046 kernel: audit: type=1400 audit(1747442898.161:82): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:48:26.390055 kernel: audit: type=1334 audit(1747442898.178:83): prog-id=10 op=LOAD May 17 00:48:26.390064 kernel: audit: type=1334 audit(1747442898.178:84): prog-id=10 op=UNLOAD May 17 00:48:26.390073 kernel: audit: type=1334 audit(1747442898.195:85): prog-id=11 op=LOAD May 17 00:48:26.390082 kernel: audit: type=1334 audit(1747442898.195:86): prog-id=11 op=UNLOAD May 17 00:48:26.390090 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 17 00:48:26.390100 kernel: audit: type=1400 audit(1747442899.396:87): avc: denied { associate } for pid=1075 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 17 00:48:26.390110 systemd[1]: Populated /etc with preset unit settings. May 17 00:48:26.390119 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:48:26.390129 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
May 17 00:48:26.390140 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:48:26.390149 kernel: kauditd_printk_skb: 8 callbacks suppressed May 17 00:48:26.390157 kernel: audit: type=1334 audit(1747442905.657:89): prog-id=12 op=LOAD May 17 00:48:26.390165 kernel: audit: type=1334 audit(1747442905.657:90): prog-id=3 op=UNLOAD May 17 00:48:26.390175 systemd[1]: iscsiuio.service: Deactivated successfully. May 17 00:48:26.390184 kernel: audit: type=1334 audit(1747442905.662:91): prog-id=13 op=LOAD May 17 00:48:26.390193 systemd[1]: Stopped iscsiuio.service. May 17 00:48:26.390202 kernel: audit: type=1334 audit(1747442905.668:92): prog-id=14 op=LOAD May 17 00:48:26.390212 kernel: audit: type=1334 audit(1747442905.668:93): prog-id=4 op=UNLOAD May 17 00:48:26.390221 kernel: audit: type=1334 audit(1747442905.668:94): prog-id=5 op=UNLOAD May 17 00:48:26.390230 systemd[1]: iscsid.service: Deactivated successfully. May 17 00:48:26.390241 kernel: audit: type=1131 audit(1747442905.669:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:26.390250 systemd[1]: Stopped iscsid.service. May 17 00:48:26.390259 kernel: audit: type=1334 audit(1747442905.690:96): prog-id=12 op=UNLOAD May 17 00:48:26.390269 kernel: audit: type=1131 audit(1747442905.709:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:26.390279 kernel: audit: type=1131 audit(1747442905.760:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 17 00:48:26.390288 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 17 00:48:26.390297 systemd[1]: Stopped initrd-switch-root.service. May 17 00:48:26.390307 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 17 00:48:26.390318 systemd[1]: Created slice system-addon\x2dconfig.slice. May 17 00:48:26.390327 systemd[1]: Created slice system-addon\x2drun.slice. May 17 00:48:26.390337 systemd[1]: Created slice system-getty.slice. May 17 00:48:26.390346 systemd[1]: Created slice system-modprobe.slice. May 17 00:48:26.390355 systemd[1]: Created slice system-serial\x2dgetty.slice. May 17 00:48:26.390365 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 17 00:48:26.390374 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 17 00:48:26.390383 systemd[1]: Created slice user.slice. May 17 00:48:26.390393 systemd[1]: Started systemd-ask-password-console.path. May 17 00:48:26.390403 systemd[1]: Started systemd-ask-password-wall.path. May 17 00:48:26.390412 systemd[1]: Set up automount boot.automount. May 17 00:48:26.390422 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 17 00:48:26.390431 systemd[1]: Stopped target initrd-switch-root.target. May 17 00:48:26.390440 systemd[1]: Stopped target initrd-fs.target. May 17 00:48:26.390450 systemd[1]: Stopped target initrd-root-fs.target. May 17 00:48:26.390459 systemd[1]: Reached target integritysetup.target. May 17 00:48:26.390470 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:48:26.390479 systemd[1]: Reached target remote-fs.target. May 17 00:48:26.390488 systemd[1]: Reached target slices.target. May 17 00:48:26.390498 systemd[1]: Reached target swap.target. May 17 00:48:26.390507 systemd[1]: Reached target torcx.target. May 17 00:48:26.390516 systemd[1]: Reached target veritysetup.target. May 17 00:48:26.390527 systemd[1]: Listening on systemd-coredump.socket. 
May 17 00:48:26.390537 systemd[1]: Listening on systemd-initctl.socket. May 17 00:48:26.390547 systemd[1]: Listening on systemd-networkd.socket. May 17 00:48:26.390556 systemd[1]: Listening on systemd-udevd-control.socket. May 17 00:48:26.390565 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 00:48:26.390575 systemd[1]: Listening on systemd-userdbd.socket. May 17 00:48:26.390584 systemd[1]: Mounting dev-hugepages.mount... May 17 00:48:26.390593 systemd[1]: Mounting dev-mqueue.mount... May 17 00:48:26.390603 systemd[1]: Mounting media.mount... May 17 00:48:26.390620 systemd[1]: Mounting sys-kernel-debug.mount... May 17 00:48:26.390630 systemd[1]: Mounting sys-kernel-tracing.mount... May 17 00:48:26.390639 systemd[1]: Mounting tmp.mount... May 17 00:48:26.390648 systemd[1]: Starting flatcar-tmpfiles.service... May 17 00:48:26.390658 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:48:26.390667 systemd[1]: Starting kmod-static-nodes.service... May 17 00:48:26.390678 systemd[1]: Starting modprobe@configfs.service... May 17 00:48:26.390687 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:48:26.390697 systemd[1]: Starting modprobe@drm.service... May 17 00:48:26.390707 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:48:26.390716 systemd[1]: Starting modprobe@fuse.service... May 17 00:48:26.390726 systemd[1]: Starting modprobe@loop.service... May 17 00:48:26.390735 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:48:26.390745 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 17 00:48:26.390755 systemd[1]: Stopped systemd-fsck-root.service. May 17 00:48:26.390764 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 17 00:48:26.390774 systemd[1]: Stopped systemd-fsck-usr.service. May 17 00:48:26.390784 systemd[1]: Stopped systemd-journald.service. 
May 17 00:48:26.390793 systemd[1]: systemd-journald.service: Consumed 2.926s CPU time.
May 17 00:48:26.390803 systemd[1]: Starting systemd-journald.service...
May 17 00:48:26.390812 kernel: loop: module loaded
May 17 00:48:26.390821 systemd[1]: Starting systemd-modules-load.service...
May 17 00:48:26.390830 systemd[1]: Starting systemd-network-generator.service...
May 17 00:48:26.390839 systemd[1]: Starting systemd-remount-fs.service...
May 17 00:48:26.390848 kernel: fuse: init (API version 7.34)
May 17 00:48:26.390858 systemd[1]: Starting systemd-udev-trigger.service...
May 17 00:48:26.390868 systemd[1]: verity-setup.service: Deactivated successfully.
May 17 00:48:26.390878 systemd[1]: Stopped verity-setup.service.
May 17 00:48:26.390888 systemd[1]: Mounted dev-hugepages.mount.
May 17 00:48:26.390897 systemd[1]: Mounted dev-mqueue.mount.
May 17 00:48:26.390906 systemd[1]: Mounted media.mount.
May 17 00:48:26.390915 systemd[1]: Mounted sys-kernel-debug.mount.
May 17 00:48:26.390925 systemd[1]: Mounted sys-kernel-tracing.mount.
May 17 00:48:26.390934 systemd[1]: Mounted tmp.mount.
May 17 00:48:26.390946 systemd-journald[1181]: Journal started
May 17 00:48:26.391165 systemd-journald[1181]: Runtime Journal (/run/log/journal/b84370cc6d7549c885daada0b2e19ba4) is 8.0M, max 78.5M, 70.5M free.
May 17 00:48:17.454000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
May 17 00:48:18.161000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 17 00:48:18.161000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 17 00:48:18.178000 audit: BPF prog-id=10 op=LOAD
May 17 00:48:18.178000 audit: BPF prog-id=10 op=UNLOAD
May 17 00:48:18.195000 audit: BPF prog-id=11 op=LOAD
May 17 00:48:18.195000 audit: BPF prog-id=11 op=UNLOAD
May 17 00:48:19.396000 audit[1075]: AVC avc: denied { associate } for pid=1075 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
May 17 00:48:19.396000 audit[1075]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40000222fc a1=40000283d8 a2=4000026840 a3=32 items=0 ppid=1058 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:48:19.396000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 17 00:48:19.411000 audit[1075]: AVC avc: denied { associate } for pid=1075 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
May 17 00:48:19.411000 audit[1075]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001456b9 a2=1ed a3=0 items=2 ppid=1058 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:48:19.411000 audit: CWD cwd="/"
May 17 00:48:19.411000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:48:19.411000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:48:19.411000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 17 00:48:25.657000 audit: BPF prog-id=12 op=LOAD
May 17 00:48:25.657000 audit: BPF prog-id=3 op=UNLOAD
May 17 00:48:25.662000 audit: BPF prog-id=13 op=LOAD
May 17 00:48:25.668000 audit: BPF prog-id=14 op=LOAD
May 17 00:48:25.668000 audit: BPF prog-id=4 op=UNLOAD
May 17 00:48:25.668000 audit: BPF prog-id=5 op=UNLOAD
May 17 00:48:25.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:25.690000 audit: BPF prog-id=12 op=UNLOAD
May 17 00:48:25.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:25.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:25.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:25.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:26.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:26.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:26.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:26.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:26.261000 audit: BPF prog-id=15 op=LOAD
May 17 00:48:26.261000 audit: BPF prog-id=16 op=LOAD
May 17 00:48:26.261000 audit: BPF prog-id=17 op=LOAD
May 17 00:48:26.261000 audit: BPF prog-id=13 op=UNLOAD
May 17 00:48:26.261000 audit: BPF prog-id=14 op=UNLOAD
May 17 00:48:26.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:26.387000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 17 00:48:26.387000 audit[1181]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffc69e5c90 a2=4000 a3=1 items=0 ppid=1 pid=1181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:48:26.387000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 17 00:48:25.656234 systemd[1]: Queued start job for default target multi-user.target.
May 17 00:48:19.352177 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-05-17T00:48:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 17 00:48:25.656247 systemd[1]: Unnecessary job was removed for dev-sda6.device.
May 17 00:48:19.352446 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-05-17T00:48:19Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 17 00:48:25.670003 systemd[1]: systemd-journald.service: Deactivated successfully.
May 17 00:48:19.352469 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-05-17T00:48:19Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 17 00:48:25.670343 systemd[1]: systemd-journald.service: Consumed 2.926s CPU time.
May 17 00:48:19.352504 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-05-17T00:48:19Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
May 17 00:48:19.352513 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-05-17T00:48:19Z" level=debug msg="skipped missing lower profile" missing profile=oem
May 17 00:48:19.352540 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-05-17T00:48:19Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
May 17 00:48:19.352552 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-05-17T00:48:19Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
May 17 00:48:19.352764 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-05-17T00:48:19Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
May 17 00:48:19.352798 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-05-17T00:48:19Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 17 00:48:19.352810 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-05-17T00:48:19Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 17 00:48:19.382162 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-05-17T00:48:19Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
May 17 00:48:19.382207 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-05-17T00:48:19Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
May 17 00:48:19.382227 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-05-17T00:48:19Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7
May 17 00:48:19.382242 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-05-17T00:48:19Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
May 17 00:48:19.382261 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-05-17T00:48:19Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7
May 17 00:48:19.382273 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-05-17T00:48:19Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
May 17 00:48:24.648307 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-05-17T00:48:24Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 17 00:48:24.648573 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-05-17T00:48:24Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 17 00:48:24.648690 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-05-17T00:48:24Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 17 00:48:24.648851 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-05-17T00:48:24Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 17 00:48:24.648899 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-05-17T00:48:24Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
May 17 00:48:24.648952 /usr/lib/systemd/system-generators/torcx-generator[1075]: time="2025-05-17T00:48:24Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
May 17 00:48:26.403793 systemd[1]: Started systemd-journald.service.
May 17 00:48:26.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:26.404667 systemd[1]: Finished flatcar-tmpfiles.service.
May 17 00:48:26.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:26.409297 systemd[1]: Finished kmod-static-nodes.service.
May 17 00:48:26.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:26.414279 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 17 00:48:26.414398 systemd[1]: Finished modprobe@configfs.service.
May 17 00:48:26.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:26.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:26.419069 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:48:26.419190 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:48:26.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:26.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:26.423600 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:48:26.423738 systemd[1]: Finished modprobe@drm.service.
May 17 00:48:26.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:26.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:26.428398 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:48:26.428520 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:48:26.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:26.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:26.433501 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 17 00:48:26.433630 systemd[1]: Finished modprobe@fuse.service.
May 17 00:48:26.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:26.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:26.438030 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:48:26.438150 systemd[1]: Finished modprobe@loop.service.
May 17 00:48:26.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:26.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:26.442601 systemd[1]: Finished systemd-network-generator.service.
May 17 00:48:26.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:26.448560 systemd[1]: Finished systemd-remount-fs.service.
May 17 00:48:26.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:26.453449 systemd[1]: Reached target network-pre.target.
May 17 00:48:26.459273 systemd[1]: Mounting sys-fs-fuse-connections.mount...
May 17 00:48:26.464509 systemd[1]: Mounting sys-kernel-config.mount...
May 17 00:48:26.468302 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 00:48:26.503257 systemd[1]: Starting systemd-hwdb-update.service...
May 17 00:48:26.509605 systemd[1]: Starting systemd-journal-flush.service...
May 17 00:48:26.514605 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:48:26.515766 systemd[1]: Starting systemd-random-seed.service...
May 17 00:48:26.520013 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 00:48:26.521048 systemd[1]: Starting systemd-sysusers.service...
May 17 00:48:26.526826 systemd[1]: Finished systemd-modules-load.service.
May 17 00:48:26.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:26.531871 systemd[1]: Finished systemd-udev-trigger.service.
May 17 00:48:26.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:26.536881 systemd[1]: Mounted sys-fs-fuse-connections.mount.
May 17 00:48:26.542144 systemd[1]: Mounted sys-kernel-config.mount.
May 17 00:48:26.547849 systemd[1]: Starting systemd-sysctl.service...
May 17 00:48:26.553084 systemd[1]: Starting systemd-udev-settle.service...
May 17 00:48:26.562709 udevadm[1195]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 17 00:48:26.568193 systemd[1]: Finished systemd-random-seed.service.
May 17 00:48:26.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:26.573225 systemd[1]: Reached target first-boot-complete.target.
May 17 00:48:26.584522 systemd-journald[1181]: Time spent on flushing to /var/log/journal/b84370cc6d7549c885daada0b2e19ba4 is 16.153ms for 1093 entries.
May 17 00:48:26.584522 systemd-journald[1181]: System Journal (/var/log/journal/b84370cc6d7549c885daada0b2e19ba4) is 8.0M, max 2.6G, 2.6G free.
May 17 00:48:26.704040 systemd-journald[1181]: Received client request to flush runtime journal.
May 17 00:48:26.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:26.634534 systemd[1]: Finished systemd-sysctl.service.
May 17 00:48:26.705036 systemd[1]: Finished systemd-journal-flush.service.
May 17 00:48:26.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:27.137582 systemd[1]: Finished systemd-sysusers.service.
May 17 00:48:27.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:27.143355 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 17 00:48:27.474588 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 17 00:48:27.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:27.748602 systemd[1]: Finished systemd-hwdb-update.service.
May 17 00:48:27.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:27.753000 audit: BPF prog-id=18 op=LOAD
May 17 00:48:27.753000 audit: BPF prog-id=19 op=LOAD
May 17 00:48:27.753000 audit: BPF prog-id=7 op=UNLOAD
May 17 00:48:27.753000 audit: BPF prog-id=8 op=UNLOAD
May 17 00:48:27.754839 systemd[1]: Starting systemd-udevd.service...
May 17 00:48:27.772760 systemd-udevd[1200]: Using default interface naming scheme 'v252'.
May 17 00:48:28.035630 systemd[1]: Started systemd-udevd.service.
May 17 00:48:28.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:28.045000 audit: BPF prog-id=20 op=LOAD
May 17 00:48:28.046968 systemd[1]: Starting systemd-networkd.service...
May 17 00:48:28.065926 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped.
May 17 00:48:28.116000 audit: BPF prog-id=21 op=LOAD
May 17 00:48:28.116000 audit: BPF prog-id=22 op=LOAD
May 17 00:48:28.116000 audit: BPF prog-id=23 op=LOAD
May 17 00:48:28.118197 systemd[1]: Starting systemd-userdbd.service...
May 17 00:48:28.122000 audit[1203]: AVC avc: denied { confidentiality } for pid=1203 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
May 17 00:48:28.151654 kernel: hv_vmbus: registering driver hv_balloon
May 17 00:48:28.161287 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
May 17 00:48:28.161370 kernel: hv_balloon: Memory hot add disabled on ARM64
May 17 00:48:28.161402 kernel: hv_vmbus: registering driver hyperv_fb
May 17 00:48:28.122000 audit[1203]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaab12216580 a1=aa2c a2=ffff896d24b0 a3=aaab12177010 items=12 ppid=1200 pid=1203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:48:28.122000 audit: CWD cwd="/"
May 17 00:48:28.122000 audit: PATH item=0 name=(null) inode=6289 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:48:28.122000 audit: PATH item=1 name=(null) inode=10814 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:48:28.122000 audit: PATH item=2 name=(null) inode=10814 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:48:28.122000 audit: PATH item=3 name=(null) inode=10815 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:48:28.122000 audit: PATH item=4 name=(null) inode=10814 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:48:28.122000 audit: PATH item=5 name=(null) inode=10816 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:48:28.122000 audit: PATH item=6 name=(null) inode=10814 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:48:28.122000 audit: PATH item=7 name=(null) inode=10817 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:48:28.122000 audit: PATH item=8 name=(null) inode=10814 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:48:28.122000 audit: PATH item=9 name=(null) inode=10818 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:48:28.122000 audit: PATH item=10 name=(null) inode=10814 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:48:28.122000 audit: PATH item=11 name=(null) inode=10819 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:48:28.122000 audit: PROCTITLE proctitle="(udev-worker)"
May 17 00:48:28.188035 kernel: mousedev: PS/2 mouse device common for all mice
May 17 00:48:28.188178 kernel: hv_utils: Registering HyperV Utility Driver
May 17 00:48:28.188209 kernel: hv_vmbus: registering driver hv_utils
May 17 00:48:28.192882 kernel: hyperv_fb: Synthvid Version major 3, minor 5
May 17 00:48:28.200084 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
May 17 00:48:28.200149 kernel: hv_utils: Heartbeat IC version 3.0
May 17 00:48:28.206809 kernel: hv_utils: Shutdown IC version 3.2
May 17 00:48:28.206893 kernel: Console: switching to colour dummy device 80x25
May 17 00:48:28.211962 systemd[1]: Started systemd-userdbd.service.
May 17 00:48:28.220684 kernel: hv_utils: TimeSync IC version 4.0
May 17 00:48:28.220759 kernel: Console: switching to colour frame buffer device 128x48
May 17 00:48:28.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:28.020271 systemd-networkd[1221]: lo: Link UP
May 17 00:48:28.125043 systemd-journald[1181]: Time jumped backwards, rotating.
May 17 00:48:28.125117 kernel: mlx5_core edf1:00:02.0 enP60913s1: Link up
May 17 00:48:28.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:28.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:28.026132 systemd-networkd[1221]: lo: Gained carrier
May 17 00:48:28.026596 systemd-networkd[1221]: Enumeration completed
May 17 00:48:28.026686 systemd[1]: Started systemd-networkd.service.
May 17 00:48:28.033282 systemd[1]: Starting systemd-networkd-wait-online.service...
May 17 00:48:28.058685 systemd-networkd[1221]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:48:28.074156 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 17 00:48:28.081621 systemd[1]: Finished systemd-udev-settle.service.
May 17 00:48:28.088180 systemd[1]: Starting lvm2-activation-early.service...
May 17 00:48:28.145195 kernel: hv_netvsc 000d3afc-96de-000d-3afc-96de000d3afc eth0: Data path switched to VF: enP60913s1
May 17 00:48:28.145543 systemd-networkd[1221]: enP60913s1: Link UP
May 17 00:48:28.145701 systemd-networkd[1221]: eth0: Link UP
May 17 00:48:28.145768 systemd-networkd[1221]: eth0: Gained carrier
May 17 00:48:28.151477 systemd-networkd[1221]: enP60913s1: Gained carrier
May 17 00:48:28.161312 systemd-networkd[1221]: eth0: DHCPv4 address 10.200.20.19/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 17 00:48:28.355318 lvm[1280]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:48:28.382081 systemd[1]: Finished lvm2-activation-early.service.
May 17 00:48:28.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:48:28.386937 systemd[1]: Reached target cryptsetup.target.
May 17 00:48:28.392351 systemd[1]: Starting lvm2-activation.service...
May 17 00:48:28.396355 lvm[1281]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:48:28.420972 systemd[1]: Finished lvm2-activation.service.
May 17 00:48:28.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:28.426357 systemd[1]: Reached target local-fs-pre.target. May 17 00:48:28.430861 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:48:28.430889 systemd[1]: Reached target local-fs.target. May 17 00:48:28.435154 systemd[1]: Reached target machines.target. May 17 00:48:28.440716 systemd[1]: Starting ldconfig.service... May 17 00:48:28.444495 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:48:28.444558 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:48:28.445648 systemd[1]: Starting systemd-boot-update.service... May 17 00:48:28.450692 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 17 00:48:28.457178 systemd[1]: Starting systemd-machine-id-commit.service... May 17 00:48:28.462676 systemd[1]: Starting systemd-sysext.service... May 17 00:48:28.520895 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1283 (bootctl) May 17 00:48:28.522221 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 17 00:48:28.889198 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 17 00:48:28.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:28.898276 systemd[1]: Unmounting usr-share-oem.mount... 
May 17 00:48:28.913583 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:48:28.914560 systemd[1]: Finished systemd-machine-id-commit.service. May 17 00:48:28.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:28.948204 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 17 00:48:28.948514 systemd[1]: Unmounted usr-share-oem.mount. May 17 00:48:28.995200 kernel: loop0: detected capacity change from 0 to 211168 May 17 00:48:29.038198 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:48:29.056201 kernel: loop1: detected capacity change from 0 to 211168 May 17 00:48:29.061658 (sd-sysext)[1295]: Using extensions 'kubernetes'. May 17 00:48:29.062989 (sd-sysext)[1295]: Merged extensions into '/usr'. May 17 00:48:29.079296 systemd[1]: Mounting usr-share-oem.mount... May 17 00:48:29.084005 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:48:29.085348 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:48:29.090750 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:48:29.097375 systemd[1]: Starting modprobe@loop.service... May 17 00:48:29.102985 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:48:29.103228 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:48:29.103429 systemd-fsck[1290]: fsck.fat 4.2 (2021-01-31) May 17 00:48:29.103429 systemd-fsck[1290]: /dev/sda1: 236 files, 117182/258078 clusters May 17 00:48:29.107756 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
May 17 00:48:29.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.114459 systemd[1]: Mounted usr-share-oem.mount. May 17 00:48:29.118820 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:48:29.118946 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:48:29.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.123817 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:48:29.123935 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:48:29.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.129049 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:48:29.129177 systemd[1]: Finished modprobe@loop.service. May 17 00:48:29.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:48:29.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.135872 systemd[1]: Finished systemd-sysext.service. May 17 00:48:29.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.142163 systemd[1]: Mounting boot.mount... May 17 00:48:29.146400 systemd[1]: Starting ensure-sysext.service... May 17 00:48:29.153487 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:48:29.153551 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:48:29.154469 systemd[1]: Starting systemd-tmpfiles-setup.service... May 17 00:48:29.160869 systemd[1]: Mounted boot.mount. May 17 00:48:29.166369 systemd[1]: Reloading. May 17 00:48:29.167117 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 17 00:48:29.186657 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:48:29.203785 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
May 17 00:48:29.216914 /usr/lib/systemd/system-generators/torcx-generator[1328]: time="2025-05-17T00:48:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:48:29.216945 /usr/lib/systemd/system-generators/torcx-generator[1328]: time="2025-05-17T00:48:29Z" level=info msg="torcx already run" May 17 00:48:29.292122 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:48:29.292142 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:48:29.307280 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 17 00:48:29.367000 audit: BPF prog-id=24 op=LOAD May 17 00:48:29.367000 audit: BPF prog-id=20 op=UNLOAD May 17 00:48:29.368000 audit: BPF prog-id=25 op=LOAD May 17 00:48:29.368000 audit: BPF prog-id=15 op=UNLOAD May 17 00:48:29.368000 audit: BPF prog-id=26 op=LOAD May 17 00:48:29.368000 audit: BPF prog-id=27 op=LOAD May 17 00:48:29.368000 audit: BPF prog-id=16 op=UNLOAD May 17 00:48:29.368000 audit: BPF prog-id=17 op=UNLOAD May 17 00:48:29.369000 audit: BPF prog-id=28 op=LOAD May 17 00:48:29.369000 audit: BPF prog-id=21 op=UNLOAD May 17 00:48:29.369000 audit: BPF prog-id=29 op=LOAD May 17 00:48:29.369000 audit: BPF prog-id=30 op=LOAD May 17 00:48:29.369000 audit: BPF prog-id=22 op=UNLOAD May 17 00:48:29.369000 audit: BPF prog-id=23 op=UNLOAD May 17 00:48:29.371000 audit: BPF prog-id=31 op=LOAD May 17 00:48:29.371000 audit: BPF prog-id=32 op=LOAD May 17 00:48:29.371000 audit: BPF prog-id=18 op=UNLOAD May 17 00:48:29.371000 audit: BPF prog-id=19 op=UNLOAD May 17 00:48:29.374648 systemd[1]: Finished systemd-boot-update.service. May 17 00:48:29.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.387542 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:48:29.388728 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:48:29.393796 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:48:29.399203 systemd[1]: Starting modprobe@loop.service... May 17 00:48:29.403016 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:48:29.403137 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
May 17 00:48:29.403913 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:48:29.404049 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:48:29.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.408850 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:48:29.408963 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:48:29.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.413910 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:48:29.414021 systemd[1]: Finished modprobe@loop.service. May 17 00:48:29.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:48:29.420375 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:48:29.421489 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:48:29.426499 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:48:29.431862 systemd[1]: Starting modprobe@loop.service... May 17 00:48:29.436111 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:48:29.436265 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:48:29.437022 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:48:29.437236 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:48:29.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.441980 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:48:29.442095 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:48:29.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:48:29.447149 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:48:29.447368 systemd[1]: Finished modprobe@loop.service. May 17 00:48:29.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.452072 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:48:29.452379 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:48:29.454937 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:48:29.456263 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:48:29.461200 systemd[1]: Starting modprobe@drm.service... May 17 00:48:29.465883 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:48:29.471230 systemd[1]: Starting modprobe@loop.service... May 17 00:48:29.475033 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 00:48:29.475156 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:48:29.476068 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:48:29.476226 systemd[1]: Finished modprobe@dm_mod.service. 
May 17 00:48:29.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.480923 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:48:29.481037 systemd[1]: Finished modprobe@drm.service. May 17 00:48:29.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.485882 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:48:29.486018 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:48:29.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.491121 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:48:29.491267 systemd[1]: Finished modprobe@loop.service. 
May 17 00:48:29.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.495932 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:48:29.496018 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:48:29.497014 systemd[1]: Finished ensure-sysext.service. May 17 00:48:29.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.761968 systemd[1]: Finished systemd-tmpfiles-setup.service. May 17 00:48:29.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.768553 systemd[1]: Starting audit-rules.service... May 17 00:48:29.773490 systemd[1]: Starting clean-ca-certificates.service... May 17 00:48:29.779034 systemd[1]: Starting systemd-journal-catalog-update.service... May 17 00:48:29.783000 audit: BPF prog-id=33 op=LOAD May 17 00:48:29.786080 systemd[1]: Starting systemd-resolved.service... May 17 00:48:29.789000 audit: BPF prog-id=34 op=LOAD May 17 00:48:29.792025 systemd[1]: Starting systemd-timesyncd.service... May 17 00:48:29.798815 systemd[1]: Starting systemd-update-utmp.service... May 17 00:48:29.864089 systemd[1]: Started systemd-timesyncd.service. 
May 17 00:48:29.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.867000 audit[1402]: SYSTEM_BOOT pid=1402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 17 00:48:29.871133 systemd[1]: Reached target time-set.target. May 17 00:48:29.881368 systemd[1]: Finished clean-ca-certificates.service. May 17 00:48:29.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.886516 systemd[1]: Finished systemd-update-utmp.service. May 17 00:48:29.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.891257 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:48:29.901531 systemd[1]: Finished systemd-journal-catalog-update.service. May 17 00:48:29.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:29.940230 systemd-resolved[1399]: Positive Trust Anchors: May 17 00:48:29.940642 systemd-resolved[1399]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:48:29.940741 systemd-resolved[1399]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 17 00:48:30.075607 systemd-resolved[1399]: Using system hostname 'ci-3510.3.7-n-5e40c0776b'. May 17 00:48:30.077248 systemd[1]: Started systemd-resolved.service. May 17 00:48:30.078709 systemd-timesyncd[1400]: Contacted time server 172.235.32.243:123 (0.flatcar.pool.ntp.org). May 17 00:48:30.079001 systemd-timesyncd[1400]: Initial clock synchronization to Sat 2025-05-17 00:48:30.078365 UTC. May 17 00:48:30.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:30.082023 systemd[1]: Reached target network.target. May 17 00:48:30.086453 systemd[1]: Reached target nss-lookup.target. May 17 00:48:30.115401 systemd-networkd[1221]: eth0: Gained IPv6LL May 17 00:48:30.117373 systemd[1]: Finished systemd-networkd-wait-online.service. May 17 00:48:30.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:48:30.122651 systemd[1]: Reached target network-online.target. 
May 17 00:48:30.133002 augenrules[1417]: No rules May 17 00:48:30.131000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 17 00:48:30.131000 audit[1417]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd89af1b0 a2=420 a3=0 items=0 ppid=1396 pid=1417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:48:30.131000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 17 00:48:30.134053 systemd[1]: Finished audit-rules.service. May 17 00:48:35.820915 ldconfig[1282]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:48:35.840236 systemd[1]: Finished ldconfig.service. May 17 00:48:35.846539 systemd[1]: Starting systemd-update-done.service... May 17 00:48:35.870002 systemd[1]: Finished systemd-update-done.service. May 17 00:48:35.875250 systemd[1]: Reached target sysinit.target. May 17 00:48:35.880079 systemd[1]: Started motdgen.path. May 17 00:48:35.883851 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 17 00:48:35.890473 systemd[1]: Started logrotate.timer. May 17 00:48:35.894860 systemd[1]: Started mdadm.timer. May 17 00:48:35.898557 systemd[1]: Started systemd-tmpfiles-clean.timer. May 17 00:48:35.903223 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:48:35.903260 systemd[1]: Reached target paths.target. May 17 00:48:35.907474 systemd[1]: Reached target timers.target. May 17 00:48:35.912956 systemd[1]: Listening on dbus.socket. May 17 00:48:35.917928 systemd[1]: Starting docker.socket... May 17 00:48:35.951040 systemd[1]: Listening on sshd.socket. 
May 17 00:48:35.955223 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:48:35.955719 systemd[1]: Listening on docker.socket. May 17 00:48:35.960005 systemd[1]: Reached target sockets.target. May 17 00:48:35.964341 systemd[1]: Reached target basic.target. May 17 00:48:35.968493 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:48:35.968523 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 17 00:48:35.969540 systemd[1]: Starting containerd.service... May 17 00:48:35.974354 systemd[1]: Starting dbus.service... May 17 00:48:35.978432 systemd[1]: Starting enable-oem-cloudinit.service... May 17 00:48:35.983628 systemd[1]: Starting extend-filesystems.service... May 17 00:48:35.991355 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 17 00:48:35.992549 systemd[1]: Starting kubelet.service... May 17 00:48:35.997097 systemd[1]: Starting motdgen.service... May 17 00:48:36.001584 systemd[1]: Started nvidia.service. May 17 00:48:36.006794 systemd[1]: Starting prepare-helm.service... May 17 00:48:36.011663 systemd[1]: Starting ssh-key-proc-cmdline.service... May 17 00:48:36.017307 systemd[1]: Starting sshd-keygen.service... May 17 00:48:36.024340 systemd[1]: Starting systemd-logind.service... May 17 00:48:36.028706 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 00:48:36.028773 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
May 17 00:48:36.029200 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:48:36.031814 systemd[1]: Starting update-engine.service... May 17 00:48:36.036813 jq[1427]: false May 17 00:48:36.037988 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 17 00:48:36.040491 jq[1444]: true May 17 00:48:36.050600 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:48:36.050798 systemd[1]: Finished ssh-key-proc-cmdline.service. May 17 00:48:36.058385 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:48:36.058715 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 17 00:48:36.070127 systemd[1]: motdgen.service: Deactivated successfully. May 17 00:48:36.070317 systemd[1]: Finished motdgen.service. May 17 00:48:36.081408 extend-filesystems[1428]: Found loop1 May 17 00:48:36.081408 extend-filesystems[1428]: Found sda May 17 00:48:36.081408 extend-filesystems[1428]: Found sda1 May 17 00:48:36.081408 extend-filesystems[1428]: Found sda2 May 17 00:48:36.081408 extend-filesystems[1428]: Found sda3 May 17 00:48:36.081408 extend-filesystems[1428]: Found usr May 17 00:48:36.081408 extend-filesystems[1428]: Found sda4 May 17 00:48:36.121661 extend-filesystems[1428]: Found sda6 May 17 00:48:36.121661 extend-filesystems[1428]: Found sda7 May 17 00:48:36.121661 extend-filesystems[1428]: Found sda9 May 17 00:48:36.121661 extend-filesystems[1428]: Checking size of /dev/sda9 May 17 00:48:36.141793 jq[1450]: true May 17 00:48:36.146683 env[1452]: time="2025-05-17T00:48:36.144904120Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 17 00:48:36.166698 systemd-logind[1440]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) May 17 00:48:36.170455 env[1452]: time="2025-05-17T00:48:36.170420265Z" 
level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 17 00:48:36.171714 env[1452]: time="2025-05-17T00:48:36.171686971Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 17 00:48:36.173293 env[1452]: time="2025-05-17T00:48:36.173255433Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 17 00:48:36.173374 env[1452]: time="2025-05-17T00:48:36.173360352Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 17 00:48:36.173639 env[1452]: time="2025-05-17T00:48:36.173617869Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:48:36.173724 env[1452]: time="2025-05-17T00:48:36.173710148Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 17 00:48:36.173786 env[1452]: time="2025-05-17T00:48:36.173771827Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 17 00:48:36.173838 env[1452]: time="2025-05-17T00:48:36.173824826Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 17 00:48:36.173963 env[1452]: time="2025-05-17T00:48:36.173948145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 17 00:48:36.174261 env[1452]: time="2025-05-17T00:48:36.174242661Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 17 00:48:36.174465 env[1452]: time="2025-05-17T00:48:36.174445539Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:48:36.174535 env[1452]: time="2025-05-17T00:48:36.174521658Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 17 00:48:36.174644 env[1452]: time="2025-05-17T00:48:36.174627377Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 17 00:48:36.174708 env[1452]: time="2025-05-17T00:48:36.174694736Z" level=info msg="metadata content store policy set" policy=shared
May 17 00:48:36.176119 systemd-logind[1440]: New seat seat0.
May 17 00:48:36.182499 extend-filesystems[1428]: Old size kept for /dev/sda9
May 17 00:48:36.182499 extend-filesystems[1428]: Found sr0
May 17 00:48:36.219379 tar[1448]: linux-arm64/LICENSE
May 17 00:48:36.219379 tar[1448]: linux-arm64/helm
May 17 00:48:36.187721 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 17 00:48:36.187899 systemd[1]: Finished extend-filesystems.service.
May 17 00:48:36.225197 env[1452]: time="2025-05-17T00:48:36.222593304Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 17 00:48:36.225197 env[1452]: time="2025-05-17T00:48:36.222635224Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 17 00:48:36.225197 env[1452]: time="2025-05-17T00:48:36.222649864Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 17 00:48:36.225197 env[1452]: time="2025-05-17T00:48:36.222688703Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 17 00:48:36.225197 env[1452]: time="2025-05-17T00:48:36.222705423Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 17 00:48:36.225197 env[1452]: time="2025-05-17T00:48:36.222719943Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 17 00:48:36.225197 env[1452]: time="2025-05-17T00:48:36.222810542Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 17 00:48:36.225197 env[1452]: time="2025-05-17T00:48:36.223157738Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 17 00:48:36.225197 env[1452]: time="2025-05-17T00:48:36.223201697Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
May 17 00:48:36.225197 env[1452]: time="2025-05-17T00:48:36.223218577Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 17 00:48:36.225197 env[1452]: time="2025-05-17T00:48:36.223231577Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 17 00:48:36.225197 env[1452]: time="2025-05-17T00:48:36.223244057Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 17 00:48:36.225197 env[1452]: time="2025-05-17T00:48:36.223367335Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 17 00:48:36.225197 env[1452]: time="2025-05-17T00:48:36.223441414Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 17 00:48:36.226086 env[1452]: time="2025-05-17T00:48:36.223663612Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 17 00:48:36.226086 env[1452]: time="2025-05-17T00:48:36.223688612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 17 00:48:36.226086 env[1452]: time="2025-05-17T00:48:36.223701571Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 17 00:48:36.226086 env[1452]: time="2025-05-17T00:48:36.223751411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 17 00:48:36.226086 env[1452]: time="2025-05-17T00:48:36.223764451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 17 00:48:36.226086 env[1452]: time="2025-05-17T00:48:36.223776291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 17 00:48:36.226086 env[1452]: time="2025-05-17T00:48:36.223786770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 17 00:48:36.226086 env[1452]: time="2025-05-17T00:48:36.223798970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 17 00:48:36.226086 env[1452]: time="2025-05-17T00:48:36.223810690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 17 00:48:36.226086 env[1452]: time="2025-05-17T00:48:36.223821610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 17 00:48:36.226086 env[1452]: time="2025-05-17T00:48:36.223832010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 17 00:48:36.226086 env[1452]: time="2025-05-17T00:48:36.223845370Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 17 00:48:36.226086 env[1452]: time="2025-05-17T00:48:36.223956208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 17 00:48:36.226086 env[1452]: time="2025-05-17T00:48:36.223973688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 17 00:48:36.226086 env[1452]: time="2025-05-17T00:48:36.223986728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 17 00:48:36.226380 env[1452]: time="2025-05-17T00:48:36.223998088Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 17 00:48:36.226380 env[1452]: time="2025-05-17T00:48:36.224011608Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
May 17 00:48:36.226380 env[1452]: time="2025-05-17T00:48:36.224021888Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 17 00:48:36.226380 env[1452]: time="2025-05-17T00:48:36.224039888Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
May 17 00:48:36.226380 env[1452]: time="2025-05-17T00:48:36.224119207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 17 00:48:36.226475 env[1452]: time="2025-05-17T00:48:36.224504362Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 17 00:48:36.226475 env[1452]: time="2025-05-17T00:48:36.224572081Z" level=info msg="Connect containerd service"
May 17 00:48:36.226475 env[1452]: time="2025-05-17T00:48:36.224604241Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 17 00:48:36.253234 env[1452]: time="2025-05-17T00:48:36.227239091Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 17 00:48:36.253234 env[1452]: time="2025-05-17T00:48:36.233551058Z" level=info msg="Start subscribing containerd event"
May 17 00:48:36.253234 env[1452]: time="2025-05-17T00:48:36.233604337Z" level=info msg="Start recovering state"
May 17 00:48:36.253234 env[1452]: time="2025-05-17T00:48:36.233666297Z" level=info msg="Start event monitor"
May 17 00:48:36.253234 env[1452]: time="2025-05-17T00:48:36.233684376Z" level=info msg="Start snapshots syncer"
May 17 00:48:36.253234 env[1452]: time="2025-05-17T00:48:36.233693856Z" level=info msg="Start cni network conf syncer for default"
May 17 00:48:36.253234 env[1452]: time="2025-05-17T00:48:36.233700616Z" level=info msg="Start streaming server"
May 17 00:48:36.253234 env[1452]: time="2025-05-17T00:48:36.239318111Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 17 00:48:36.253234 env[1452]: time="2025-05-17T00:48:36.239428470Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 17 00:48:36.227611 systemd[1]: Finished update-ssh-keys-after-ignition.service.
May 17 00:48:36.253471 bash[1478]: Updated "/home/core/.ssh/authorized_keys"
May 17 00:48:36.239588 systemd[1]: Started containerd.service.
May 17 00:48:36.255652 env[1452]: time="2025-05-17T00:48:36.255615204Z" level=info msg="containerd successfully booted in 0.111642s"
May 17 00:48:36.314866 dbus-daemon[1426]: [system] SELinux support is enabled
May 17 00:48:36.321398 dbus-daemon[1426]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 17 00:48:36.315018 systemd[1]: Started dbus.service.
May 17 00:48:36.320868 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 17 00:48:36.320889 systemd[1]: Reached target system-config.target.
May 17 00:48:36.328790 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 17 00:48:36.328810 systemd[1]: Reached target user-config.target.
May 17 00:48:36.334530 systemd[1]: Started systemd-logind.service.
May 17 00:48:36.387886 systemd[1]: nvidia.service: Deactivated successfully.
May 17 00:48:36.813843 update_engine[1443]: I0517 00:48:36.800194 1443 main.cc:92] Flatcar Update Engine starting
May 17 00:48:36.866552 systemd[1]: Started update-engine.service.
May 17 00:48:36.877152 update_engine[1443]: I0517 00:48:36.866577 1443 update_check_scheduler.cc:74] Next update check in 7m26s
May 17 00:48:36.873293 systemd[1]: Started locksmithd.service.
May 17 00:48:36.937546 tar[1448]: linux-arm64/README.md
May 17 00:48:36.942953 systemd[1]: Finished prepare-helm.service.
May 17 00:48:37.071956 systemd[1]: Started kubelet.service.
May 17 00:48:37.484647 kubelet[1538]: E0517 00:48:37.484561 1538 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:48:37.486646 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:48:37.486759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:48:38.227930 locksmithd[1534]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 17 00:48:39.335128 sshd_keygen[1445]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 17 00:48:39.351483 systemd[1]: Finished sshd-keygen.service.
May 17 00:48:39.357773 systemd[1]: Starting issuegen.service...
May 17 00:48:39.363297 systemd[1]: Started waagent.service.
May 17 00:48:39.367722 systemd[1]: issuegen.service: Deactivated successfully.
May 17 00:48:39.367881 systemd[1]: Finished issuegen.service.
May 17 00:48:39.373290 systemd[1]: Starting systemd-user-sessions.service...
May 17 00:48:39.409075 systemd[1]: Finished systemd-user-sessions.service.
May 17 00:48:39.415610 systemd[1]: Started getty@tty1.service.
May 17 00:48:39.421009 systemd[1]: Started serial-getty@ttyAMA0.service.
May 17 00:48:39.425955 systemd[1]: Reached target getty.target.
May 17 00:48:39.430114 systemd[1]: Reached target multi-user.target.
May 17 00:48:39.435747 systemd[1]: Starting systemd-update-utmp-runlevel.service...
May 17 00:48:39.446859 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
May 17 00:48:39.447045 systemd[1]: Finished systemd-update-utmp-runlevel.service.
May 17 00:48:39.452615 systemd[1]: Startup finished in 715ms (kernel) + 14.366s (initrd) + 22.894s (userspace) = 37.976s.
May 17 00:48:40.070629 login[1561]: pam_lastlog(login:session): file /var/log/lastlog is locked/write
May 17 00:48:40.072331 login[1562]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 17 00:48:40.119467 systemd[1]: Created slice user-500.slice.
May 17 00:48:40.120525 systemd[1]: Starting user-runtime-dir@500.service...
May 17 00:48:40.123471 systemd-logind[1440]: New session 2 of user core.
May 17 00:48:40.162427 systemd[1]: Finished user-runtime-dir@500.service.
May 17 00:48:40.163717 systemd[1]: Starting user@500.service...
May 17 00:48:40.195151 (systemd)[1565]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 17 00:48:40.379915 systemd[1565]: Queued start job for default target default.target.
May 17 00:48:40.380391 systemd[1565]: Reached target paths.target.
May 17 00:48:40.380411 systemd[1565]: Reached target sockets.target.
May 17 00:48:40.380422 systemd[1565]: Reached target timers.target.
May 17 00:48:40.380432 systemd[1565]: Reached target basic.target.
May 17 00:48:40.380470 systemd[1565]: Reached target default.target.
May 17 00:48:40.380493 systemd[1565]: Startup finished in 179ms.
May 17 00:48:40.380539 systemd[1]: Started user@500.service.
May 17 00:48:40.381442 systemd[1]: Started session-2.scope.
May 17 00:48:41.071025 login[1561]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 17 00:48:41.075252 systemd[1]: Started session-1.scope.
May 17 00:48:41.075685 systemd-logind[1440]: New session 1 of user core.
May 17 00:48:46.356682 waagent[1559]: 2025-05-17T00:48:46.356581Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
May 17 00:48:46.363870 waagent[1559]: 2025-05-17T00:48:46.363799Z INFO Daemon Daemon OS: flatcar 3510.3.7
May 17 00:48:46.368665 waagent[1559]: 2025-05-17T00:48:46.368608Z INFO Daemon Daemon Python: 3.9.16
May 17 00:48:46.373415 waagent[1559]: 2025-05-17T00:48:46.373353Z INFO Daemon Daemon Run daemon
May 17 00:48:46.378101 waagent[1559]: 2025-05-17T00:48:46.378045Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.7'
May 17 00:48:46.394385 waagent[1559]: 2025-05-17T00:48:46.394264Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
May 17 00:48:46.412501 waagent[1559]: 2025-05-17T00:48:46.412382Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
May 17 00:48:46.422429 waagent[1559]: 2025-05-17T00:48:46.422366Z INFO Daemon Daemon cloud-init is enabled: False
May 17 00:48:46.427646 waagent[1559]: 2025-05-17T00:48:46.427587Z INFO Daemon Daemon Using waagent for provisioning
May 17 00:48:46.433349 waagent[1559]: 2025-05-17T00:48:46.433290Z INFO Daemon Daemon Activate resource disk
May 17 00:48:46.438277 waagent[1559]: 2025-05-17T00:48:46.438222Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
May 17 00:48:46.452669 waagent[1559]: 2025-05-17T00:48:46.452607Z INFO Daemon Daemon Found device: None
May 17 00:48:46.457317 waagent[1559]: 2025-05-17T00:48:46.457257Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
May 17 00:48:46.465678 waagent[1559]: 2025-05-17T00:48:46.465621Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
May 17 00:48:46.477837 waagent[1559]: 2025-05-17T00:48:46.477776Z INFO Daemon Daemon Clean protocol and wireserver endpoint
May 17 00:48:46.483701 waagent[1559]: 2025-05-17T00:48:46.483643Z INFO Daemon Daemon Running default provisioning handler
May 17 00:48:46.496567 waagent[1559]: 2025-05-17T00:48:46.496447Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
May 17 00:48:46.515360 waagent[1559]: 2025-05-17T00:48:46.515222Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
May 17 00:48:46.526563 waagent[1559]: 2025-05-17T00:48:46.526482Z INFO Daemon Daemon cloud-init is enabled: False
May 17 00:48:46.531801 waagent[1559]: 2025-05-17T00:48:46.531735Z INFO Daemon Daemon Copying ovf-env.xml
May 17 00:48:46.825546 waagent[1559]: 2025-05-17T00:48:46.825424Z INFO Daemon Daemon Successfully mounted dvd
May 17 00:48:46.919765 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
May 17 00:48:46.958239 waagent[1559]: 2025-05-17T00:48:46.958092Z INFO Daemon Daemon Detect protocol endpoint
May 17 00:48:46.963636 waagent[1559]: 2025-05-17T00:48:46.963569Z INFO Daemon Daemon Clean protocol and wireserver endpoint
May 17 00:48:46.969604 waagent[1559]: 2025-05-17T00:48:46.969546Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
May 17 00:48:46.976261 waagent[1559]: 2025-05-17T00:48:46.976205Z INFO Daemon Daemon Test for route to 168.63.129.16
May 17 00:48:46.981844 waagent[1559]: 2025-05-17T00:48:46.981788Z INFO Daemon Daemon Route to 168.63.129.16 exists
May 17 00:48:46.986990 waagent[1559]: 2025-05-17T00:48:46.986934Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
May 17 00:48:47.146210 waagent[1559]: 2025-05-17T00:48:47.146068Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
May 17 00:48:47.153654 waagent[1559]: 2025-05-17T00:48:47.153611Z INFO Daemon Daemon Wire protocol version:2012-11-30
May 17 00:48:47.159037 waagent[1559]: 2025-05-17T00:48:47.158983Z INFO Daemon Daemon Server preferred version:2015-04-05
May 17 00:48:47.575662 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 17 00:48:47.575841 systemd[1]: Stopped kubelet.service.
May 17 00:48:47.577220 systemd[1]: Starting kubelet.service...
May 17 00:48:47.688651 systemd[1]: Started kubelet.service.
May 17 00:48:47.810195 kubelet[1607]: E0517 00:48:47.810135 1607 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:48:47.812759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:48:47.812890 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:48:47.882657 waagent[1559]: 2025-05-17T00:48:47.882465Z INFO Daemon Daemon Initializing goal state during protocol detection
May 17 00:48:47.898817 waagent[1559]: 2025-05-17T00:48:47.898748Z INFO Daemon Daemon Forcing an update of the goal state..
May 17 00:48:47.904730 waagent[1559]: 2025-05-17T00:48:47.904665Z INFO Daemon Daemon Fetching goal state [incarnation 1]
May 17 00:48:48.155607 waagent[1559]: 2025-05-17T00:48:48.155432Z INFO Daemon Daemon Found private key matching thumbprint 46CA0264ED19F58C65FCBFB74A01C6AE1ED57C3A
May 17 00:48:48.163980 waagent[1559]: 2025-05-17T00:48:48.163919Z INFO Daemon Daemon Certificate with thumbprint 3905407BA49F47EFC368E587E46F6102CC1CE3B0 has no matching private key.
May 17 00:48:48.175828 waagent[1559]: 2025-05-17T00:48:48.175757Z INFO Daemon Daemon Fetch goal state completed
May 17 00:48:48.247225 waagent[1559]: 2025-05-17T00:48:48.247149Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 95b25db6-472b-4002-bc56-3f93f79601dc New eTag: 12862012994193266536]
May 17 00:48:48.258149 waagent[1559]: 2025-05-17T00:48:48.258084Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
May 17 00:48:48.273980 waagent[1559]: 2025-05-17T00:48:48.273923Z INFO Daemon Daemon Starting provisioning
May 17 00:48:48.278932 waagent[1559]: 2025-05-17T00:48:48.278874Z INFO Daemon Daemon Handle ovf-env.xml.
May 17 00:48:48.283704 waagent[1559]: 2025-05-17T00:48:48.283652Z INFO Daemon Daemon Set hostname [ci-3510.3.7-n-5e40c0776b]
May 17 00:48:48.324638 waagent[1559]: 2025-05-17T00:48:48.324518Z INFO Daemon Daemon Publish hostname [ci-3510.3.7-n-5e40c0776b]
May 17 00:48:48.331653 waagent[1559]: 2025-05-17T00:48:48.331582Z INFO Daemon Daemon Examine /proc/net/route for primary interface
May 17 00:48:48.338408 waagent[1559]: 2025-05-17T00:48:48.338349Z INFO Daemon Daemon Primary interface is [eth0]
May 17 00:48:48.354465 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
May 17 00:48:48.354642 systemd[1]: Stopped systemd-networkd-wait-online.service.
May 17 00:48:48.354695 systemd[1]: Stopping systemd-networkd-wait-online.service...
May 17 00:48:48.354928 systemd[1]: Stopping systemd-networkd.service...
May 17 00:48:48.360210 systemd-networkd[1221]: eth0: DHCPv6 lease lost
May 17 00:48:48.361573 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 00:48:48.361739 systemd[1]: Stopped systemd-networkd.service.
May 17 00:48:48.363618 systemd[1]: Starting systemd-networkd.service...
May 17 00:48:48.390332 systemd-networkd[1623]: enP60913s1: Link UP
May 17 00:48:48.390542 systemd-networkd[1623]: enP60913s1: Gained carrier
May 17 00:48:48.391501 systemd-networkd[1623]: eth0: Link UP
May 17 00:48:48.391585 systemd-networkd[1623]: eth0: Gained carrier
May 17 00:48:48.391954 systemd-networkd[1623]: lo: Link UP
May 17 00:48:48.392011 systemd-networkd[1623]: lo: Gained carrier
May 17 00:48:48.392317 systemd-networkd[1623]: eth0: Gained IPv6LL
May 17 00:48:48.393260 systemd-networkd[1623]: Enumeration completed
May 17 00:48:48.393435 systemd[1]: Started systemd-networkd.service.
May 17 00:48:48.394818 systemd-networkd[1623]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:48:48.394945 systemd[1]: Starting systemd-networkd-wait-online.service...
May 17 00:48:48.398368 waagent[1559]: 2025-05-17T00:48:48.398227Z INFO Daemon Daemon Create user account if not exists
May 17 00:48:48.404590 waagent[1559]: 2025-05-17T00:48:48.404499Z INFO Daemon Daemon User core already exists, skip useradd
May 17 00:48:48.411054 waagent[1559]: 2025-05-17T00:48:48.410957Z INFO Daemon Daemon Configure sudoer
May 17 00:48:48.416034 waagent[1559]: 2025-05-17T00:48:48.415972Z INFO Daemon Daemon Configure sshd
May 17 00:48:48.420279 waagent[1559]: 2025-05-17T00:48:48.420218Z INFO Daemon Daemon Deploy ssh public key.
May 17 00:48:48.426246 systemd-networkd[1623]: eth0: DHCPv4 address 10.200.20.19/24, gateway 10.200.20.1 acquired from 168.63.129.16
May 17 00:48:48.438057 systemd[1]: Finished systemd-networkd-wait-online.service.
May 17 00:48:49.588175 waagent[1559]: 2025-05-17T00:48:49.588098Z INFO Daemon Daemon Provisioning complete
May 17 00:48:49.608125 waagent[1559]: 2025-05-17T00:48:49.608062Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
May 17 00:48:49.614408 waagent[1559]: 2025-05-17T00:48:49.614347Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
May 17 00:48:49.624866 waagent[1559]: 2025-05-17T00:48:49.624805Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
May 17 00:48:49.919470 waagent[1632]: 2025-05-17T00:48:49.919328Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
May 17 00:48:49.920532 waagent[1632]: 2025-05-17T00:48:49.920478Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 17 00:48:49.920763 waagent[1632]: 2025-05-17T00:48:49.920716Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
May 17 00:48:49.933166 waagent[1632]: 2025-05-17T00:48:49.933092Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
May 17 00:48:49.933486 waagent[1632]: 2025-05-17T00:48:49.933437Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
May 17 00:48:50.001115 waagent[1632]: 2025-05-17T00:48:50.000982Z INFO ExtHandler ExtHandler Found private key matching thumbprint 46CA0264ED19F58C65FCBFB74A01C6AE1ED57C3A
May 17 00:48:50.001487 waagent[1632]: 2025-05-17T00:48:50.001435Z INFO ExtHandler ExtHandler Certificate with thumbprint 3905407BA49F47EFC368E587E46F6102CC1CE3B0 has no matching private key.
May 17 00:48:50.001797 waagent[1632]: 2025-05-17T00:48:50.001750Z INFO ExtHandler ExtHandler Fetch goal state completed
May 17 00:48:50.016370 waagent[1632]: 2025-05-17T00:48:50.016314Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 1e07ca22-a6b8-4c23-abb9-b508c3125293 New eTag: 12862012994193266536]
May 17 00:48:50.017042 waagent[1632]: 2025-05-17T00:48:50.016986Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
May 17 00:48:50.109800 waagent[1632]: 2025-05-17T00:48:50.109672Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
May 17 00:48:50.132428 waagent[1632]: 2025-05-17T00:48:50.132343Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1632
May 17 00:48:50.136314 waagent[1632]: 2025-05-17T00:48:50.136247Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk']
May 17 00:48:50.137767 waagent[1632]: 2025-05-17T00:48:50.137711Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
May 17 00:48:50.277650 waagent[1632]: 2025-05-17T00:48:50.277544Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
May 17 00:48:50.278201 waagent[1632]: 2025-05-17T00:48:50.278130Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
May 17 00:48:50.285579 waagent[1632]: 2025-05-17T00:48:50.285530Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
May 17 00:48:50.286145 waagent[1632]: 2025-05-17T00:48:50.286092Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
May 17 00:48:50.287406 waagent[1632]: 2025-05-17T00:48:50.287349Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
May 17 00:48:50.288808 waagent[1632]: 2025-05-17T00:48:50.288740Z INFO ExtHandler ExtHandler Starting env monitor service.
May 17 00:48:50.289086 waagent[1632]: 2025-05-17T00:48:50.289017Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 17 00:48:50.289654 waagent[1632]: 2025-05-17T00:48:50.289583Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
May 17 00:48:50.290240 waagent[1632]: 2025-05-17T00:48:50.290156Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
May 17 00:48:50.290690 waagent[1632]: 2025-05-17T00:48:50.290627Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
May 17 00:48:50.291725 waagent[1632]: 2025-05-17T00:48:50.291576Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
May 17 00:48:50.291875 waagent[1632]: 2025-05-17T00:48:50.291815Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
May 17 00:48:50.292039 waagent[1632]: 2025-05-17T00:48:50.291979Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
May 17 00:48:50.292039 waagent[1632]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
May 17 00:48:50.292039 waagent[1632]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
May 17 00:48:50.292039 waagent[1632]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
May 17 00:48:50.292039 waagent[1632]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
May 17 00:48:50.292039 waagent[1632]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
May 17 00:48:50.292039 waagent[1632]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
May 17 00:48:50.293754 waagent[1632]: 2025-05-17T00:48:50.293568Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 17 00:48:50.294277 waagent[1632]: 2025-05-17T00:48:50.294208Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
May 17 00:48:50.295229 waagent[1632]: 2025-05-17T00:48:50.295135Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
May 17 00:48:50.295445 waagent[1632]: 2025-05-17T00:48:50.295360Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
May 17 00:48:50.295590 waagent[1632]: 2025-05-17T00:48:50.295530Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
May 17 00:48:50.296794 waagent[1632]: 2025-05-17T00:48:50.296715Z INFO EnvHandler ExtHandler Configure routes
May 17 00:48:50.300226 waagent[1632]: 2025-05-17T00:48:50.300144Z INFO EnvHandler ExtHandler Gateway:None
May 17 00:48:50.301916 waagent[1632]: 2025-05-17T00:48:50.301840Z INFO EnvHandler ExtHandler Routes:None
May 17 00:48:50.310390 waagent[1632]: 2025-05-17T00:48:50.310333Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
May 17 00:48:50.311103 waagent[1632]: 2025-05-17T00:48:50.311054Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
May 17 00:48:50.312135 waagent[1632]: 2025-05-17T00:48:50.312081Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
May 17 00:48:50.348252 waagent[1632]: 2025-05-17T00:48:50.348109Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1623'
May 17 00:48:50.362275 waagent[1632]: 2025-05-17T00:48:50.362206Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
May 17 00:48:50.454384 waagent[1632]: 2025-05-17T00:48:50.454259Z INFO MonitorHandler ExtHandler Network interfaces:
May 17 00:48:50.454384 waagent[1632]: Executing ['ip', '-a', '-o', 'link']:
May 17 00:48:50.454384 waagent[1632]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
May 17 00:48:50.454384 waagent[1632]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fc:96:de brd ff:ff:ff:ff:ff:ff
May 17 00:48:50.454384 waagent[1632]: 3: enP60913s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fc:96:de brd ff:ff:ff:ff:ff:ff\ altname enP60913p0s2
May 17 00:48:50.454384 waagent[1632]: Executing ['ip', '-4', '-a', '-o', 'address']:
May 17 00:48:50.454384 waagent[1632]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
May 17 00:48:50.454384 waagent[1632]: 2: eth0 inet 10.200.20.19/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
May 17 00:48:50.454384 waagent[1632]: Executing ['ip', '-6', '-a', '-o', 'address']:
May 17 00:48:50.454384 waagent[1632]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
May 17 00:48:50.454384 waagent[1632]: 2: eth0 inet6 fe80::20d:3aff:fefc:96de/64 scope link \ valid_lft forever preferred_lft forever
May 17 00:48:50.655160 waagent[1632]: 2025-05-17T00:48:50.655093Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.13.1.1 -- exiting
May 17 00:48:51.629435 waagent[1559]: 2025-05-17T00:48:51.629271Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
May 17 00:48:51.633993 waagent[1559]: 2025-05-17T00:48:51.633941Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.13.1.1 to be the latest agent
May 17 00:48:52.896997 waagent[1664]: 2025-05-17T00:48:52.896907Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.13.1.1)
May 17 00:48:52.898656 waagent[1664]: 2025-05-17T00:48:52.898601Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.7
May 17 00:48:52.898887 waagent[1664]: 2025-05-17T00:48:52.898839Z INFO ExtHandler ExtHandler Python: 3.9.16
May 17 00:48:52.899095 waagent[1664]: 2025-05-17T00:48:52.899049Z INFO ExtHandler ExtHandler CPU Arch: aarch64
May 17 00:48:52.912312 waagent[1664]: 2025-05-17T00:48:52.912213Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1;
May 17 00:48:52.912808 waagent[1664]: 2025-05-17T00:48:52.912756Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 17 00:48:52.913039 waagent[1664]: 2025-05-17T00:48:52.912993Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
May 17 00:48:52.913362 waagent[1664]: 2025-05-17T00:48:52.913309Z INFO ExtHandler ExtHandler Initializing the goal state...
May 17 00:48:52.926684 waagent[1664]: 2025-05-17T00:48:52.926619Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
May 17 00:48:52.934942 waagent[1664]: 2025-05-17T00:48:52.934894Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164
May 17 00:48:52.936006 waagent[1664]: 2025-05-17T00:48:52.935951Z INFO ExtHandler
May 17 00:48:52.936268 waagent[1664]: 2025-05-17T00:48:52.936216Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 1acdbe73-3d41-4dbb-8dc8-23829f9a47fa eTag: 12862012994193266536 source: Fabric]
May 17 00:48:52.937088 waagent[1664]: 2025-05-17T00:48:52.937033Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
May 17 00:48:52.938413 waagent[1664]: 2025-05-17T00:48:52.938356Z INFO ExtHandler
May 17 00:48:52.938634 waagent[1664]: 2025-05-17T00:48:52.938588Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
May 17 00:48:52.945439 waagent[1664]: 2025-05-17T00:48:52.945395Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
May 17 00:48:52.945954 waagent[1664]: 2025-05-17T00:48:52.945910Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
May 17 00:48:52.970473 waagent[1664]: 2025-05-17T00:48:52.970418Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
May 17 00:48:53.039829 waagent[1664]: 2025-05-17T00:48:53.039701Z INFO ExtHandler Downloaded certificate {'thumbprint': '46CA0264ED19F58C65FCBFB74A01C6AE1ED57C3A', 'hasPrivateKey': True}
May 17 00:48:53.041038 waagent[1664]: 2025-05-17T00:48:53.040980Z INFO ExtHandler Downloaded certificate {'thumbprint': '3905407BA49F47EFC368E587E46F6102CC1CE3B0', 'hasPrivateKey': False}
May 17 00:48:53.042197 waagent[1664]: 2025-05-17T00:48:53.042128Z INFO ExtHandler Fetch goal state from WireServer completed
May 17 00:48:53.043148 waagent[1664]: 2025-05-17T00:48:53.043093Z INFO ExtHandler ExtHandler Goal state initialization completed.
May 17 00:48:53.063310 waagent[1664]: 2025-05-17T00:48:53.063209Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024)
May 17 00:48:53.070999 waagent[1664]: 2025-05-17T00:48:53.070905Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules
May 17 00:48:53.074467 waagent[1664]: 2025-05-17T00:48:53.074368Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT']
May 17 00:48:53.074764 waagent[1664]: 2025-05-17T00:48:53.074713Z INFO ExtHandler ExtHandler Checking state of the firewall
May 17 00:48:53.294883 waagent[1664]: 2025-05-17T00:48:53.294712Z INFO ExtHandler ExtHandler Created firewall rules for Azure Fabric:
May 17 00:48:53.294883 waagent[1664]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
May 17 00:48:53.294883 waagent[1664]: pkts bytes target prot opt in out source destination
May 17 00:48:53.294883 waagent[1664]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
May 17 00:48:53.294883 waagent[1664]: pkts bytes target prot opt in out source destination
May 17 00:48:53.294883 waagent[1664]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
May 17 00:48:53.294883 waagent[1664]: pkts bytes target prot opt in out source destination
May 17 00:48:53.294883 waagent[1664]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
May 17 00:48:53.294883 waagent[1664]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
May 17 00:48:53.294883 waagent[1664]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
May 17 00:48:53.296317 waagent[1664]: 2025-05-17T00:48:53.296258Z INFO ExtHandler ExtHandler Setting up persistent firewall rules
May 17 00:48:53.299048 waagent[1664]: 2025-05-17T00:48:53.298925Z INFO ExtHandler ExtHandler The firewalld service is not present on the system
May 17 00:48:53.299443 waagent[1664]: 2025-05-17T00:48:53.299390Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
May 17 00:48:53.299894 waagent[1664]: 2025-05-17T00:48:53.299841Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
May 17 00:48:53.307436 waagent[1664]: 2025-05-17T00:48:53.307386Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
May 17 00:48:53.307997 waagent[1664]: 2025-05-17T00:48:53.307946Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
May 17 00:48:53.316130 waagent[1664]: 2025-05-17T00:48:53.316073Z INFO ExtHandler ExtHandler WALinuxAgent-2.13.1.1 running as process 1664
May 17 00:48:53.319477 waagent[1664]: 2025-05-17T00:48:53.319420Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk']
May 17 00:48:53.320368 waagent[1664]: 2025-05-17T00:48:53.320312Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled
May 17 00:48:53.321347 waagent[1664]: 2025-05-17T00:48:53.321291Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
May 17 00:48:53.324232 waagent[1664]: 2025-05-17T00:48:53.324151Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
May 17 00:48:53.325632 waagent[1664]: 2025-05-17T00:48:53.325560Z INFO ExtHandler ExtHandler Starting env monitor service.
May 17 00:48:53.325859 waagent[1664]: 2025-05-17T00:48:53.325793Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 17 00:48:53.326290 waagent[1664]: 2025-05-17T00:48:53.326219Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
May 17 00:48:53.327149 waagent[1664]: 2025-05-17T00:48:53.327069Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
May 17 00:48:53.327481 waagent[1664]: 2025-05-17T00:48:53.327417Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
May 17 00:48:53.327481 waagent[1664]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
May 17 00:48:53.327481 waagent[1664]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
May 17 00:48:53.327481 waagent[1664]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
May 17 00:48:53.327481 waagent[1664]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
May 17 00:48:53.327481 waagent[1664]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
May 17 00:48:53.327481 waagent[1664]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
May 17 00:48:53.329561 waagent[1664]: 2025-05-17T00:48:53.329447Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
May 17 00:48:53.329948 waagent[1664]: 2025-05-17T00:48:53.329883Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 17 00:48:53.331216 waagent[1664]: 2025-05-17T00:48:53.330618Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
May 17 00:48:53.333340 waagent[1664]: 2025-05-17T00:48:53.333221Z INFO EnvHandler ExtHandler Configure routes
May 17 00:48:53.333604 waagent[1664]: 2025-05-17T00:48:53.333546Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
May 17 00:48:53.333789 waagent[1664]: 2025-05-17T00:48:53.333725Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
May 17 00:48:53.334483 waagent[1664]: 2025-05-17T00:48:53.334407Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
May 17 00:48:53.334821 waagent[1664]: 2025-05-17T00:48:53.334759Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
May 17 00:48:53.335205 waagent[1664]: 2025-05-17T00:48:53.335090Z INFO EnvHandler ExtHandler Gateway:None
May 17 00:48:53.335319 waagent[1664]: 2025-05-17T00:48:53.335259Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
May 17 00:48:53.339288 waagent[1664]: 2025-05-17T00:48:53.339222Z INFO EnvHandler ExtHandler Routes:None
May 17 00:48:53.350271 waagent[1664]: 2025-05-17T00:48:53.350189Z INFO MonitorHandler ExtHandler Network interfaces:
May 17 00:48:53.350271 waagent[1664]: Executing ['ip', '-a', '-o', 'link']:
May 17 00:48:53.350271 waagent[1664]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
May 17 00:48:53.350271 waagent[1664]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fc:96:de brd ff:ff:ff:ff:ff:ff
May 17 00:48:53.350271 waagent[1664]: 3: enP60913s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fc:96:de brd ff:ff:ff:ff:ff:ff\ altname enP60913p0s2
May 17 00:48:53.350271 waagent[1664]: Executing ['ip', '-4', '-a', '-o', 'address']:
May 17 00:48:53.350271 waagent[1664]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
May 17 00:48:53.350271 waagent[1664]: 2: eth0 inet 10.200.20.19/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
May 17 00:48:53.350271 waagent[1664]: Executing ['ip', '-6', '-a', '-o', 'address']:
May 17 00:48:53.350271 waagent[1664]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
May 17 00:48:53.350271 waagent[1664]: 2: eth0 inet6 fe80::20d:3aff:fefc:96de/64 scope link \ valid_lft forever preferred_lft forever
May 17 00:48:53.356207 waagent[1664]: 2025-05-17T00:48:53.356093Z INFO ExtHandler ExtHandler Downloading agent manifest
May 17 00:48:53.411503 waagent[1664]: 2025-05-17T00:48:53.411422Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules
May 17 00:48:53.411823 waagent[1664]: 2025-05-17T00:48:53.411749Z INFO ExtHandler ExtHandler
May 17 00:48:53.413128 waagent[1664]: 2025-05-17T00:48:53.413056Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: d7447483-9303-41c5-b4f1-3fb10868965b correlation 12a8f578-c5ed-40ad-91d8-daca3aa9dfc5 created: 2025-05-17T00:47:16.296143Z]
May 17 00:48:53.416217 waagent[1664]: 2025-05-17T00:48:53.416009Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
May 17 00:48:53.420978 waagent[1664]: 2025-05-17T00:48:53.420792Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 8 ms]
May 17 00:48:53.431595 waagent[1664]: 2025-05-17T00:48:53.431532Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
May 17 00:48:53.451625 waagent[1664]: 2025-05-17T00:48:53.451537Z INFO ExtHandler ExtHandler Looking for existing remote access users.
May 17 00:48:53.454060 waagent[1664]: 2025-05-17T00:48:53.453995Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.13.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 1FF18D50-8D95-4DEE-9951-B27F3D02F1B9;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;]
May 17 00:48:57.825673 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 17 00:48:57.825841 systemd[1]: Stopped kubelet.service.
May 17 00:48:57.827232 systemd[1]: Starting kubelet.service...
May 17 00:48:57.912563 systemd[1]: Started kubelet.service.
May 17 00:48:58.047055 kubelet[1711]: E0517 00:48:58.047014 1711 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:48:58.049365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:48:58.049487 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:49:03.210350 systemd[1]: Created slice system-sshd.slice.
May 17 00:49:03.211811 systemd[1]: Started sshd@0-10.200.20.19:22-10.200.16.10:58070.service.
May 17 00:49:03.922072 sshd[1717]: Accepted publickey for core from 10.200.16.10 port 58070 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:49:03.939403 sshd[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:49:03.943137 systemd-logind[1440]: New session 3 of user core.
May 17 00:49:03.943576 systemd[1]: Started session-3.scope.
May 17 00:49:04.335063 systemd[1]: Started sshd@1-10.200.20.19:22-10.200.16.10:58072.service.
May 17 00:49:04.785910 sshd[1722]: Accepted publickey for core from 10.200.16.10 port 58072 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:49:04.787959 sshd[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:49:04.791812 systemd-logind[1440]: New session 4 of user core.
May 17 00:49:04.792261 systemd[1]: Started session-4.scope.
May 17 00:49:05.109436 sshd[1722]: pam_unix(sshd:session): session closed for user core
May 17 00:49:05.112208 systemd[1]: sshd@1-10.200.20.19:22-10.200.16.10:58072.service: Deactivated successfully.
May 17 00:49:05.112879 systemd[1]: session-4.scope: Deactivated successfully.
May 17 00:49:05.113392 systemd-logind[1440]: Session 4 logged out. Waiting for processes to exit.
May 17 00:49:05.114032 systemd-logind[1440]: Removed session 4.
May 17 00:49:05.184590 systemd[1]: Started sshd@2-10.200.20.19:22-10.200.16.10:58084.service.
May 17 00:49:05.644958 sshd[1728]: Accepted publickey for core from 10.200.16.10 port 58084 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:49:05.646270 sshd[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:49:05.650400 systemd[1]: Started session-5.scope.
May 17 00:49:05.650687 systemd-logind[1440]: New session 5 of user core.
May 17 00:49:05.968318 sshd[1728]: pam_unix(sshd:session): session closed for user core
May 17 00:49:05.970783 systemd[1]: sshd@2-10.200.20.19:22-10.200.16.10:58084.service: Deactivated successfully.
May 17 00:49:05.971465 systemd[1]: session-5.scope: Deactivated successfully.
May 17 00:49:05.971969 systemd-logind[1440]: Session 5 logged out. Waiting for processes to exit.
May 17 00:49:05.972726 systemd-logind[1440]: Removed session 5.
May 17 00:49:06.047132 systemd[1]: Started sshd@3-10.200.20.19:22-10.200.16.10:58092.service.
May 17 00:49:06.530981 sshd[1734]: Accepted publickey for core from 10.200.16.10 port 58092 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:49:06.532523 sshd[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:49:06.536078 systemd-logind[1440]: New session 6 of user core.
May 17 00:49:06.536514 systemd[1]: Started session-6.scope.
May 17 00:49:06.894659 sshd[1734]: pam_unix(sshd:session): session closed for user core
May 17 00:49:06.896873 systemd[1]: sshd@3-10.200.20.19:22-10.200.16.10:58092.service: Deactivated successfully.
May 17 00:49:06.897556 systemd[1]: session-6.scope: Deactivated successfully.
May 17 00:49:06.898079 systemd-logind[1440]: Session 6 logged out. Waiting for processes to exit.
May 17 00:49:06.898855 systemd-logind[1440]: Removed session 6.
May 17 00:49:06.969084 systemd[1]: Started sshd@4-10.200.20.19:22-10.200.16.10:58102.service.
May 17 00:49:07.425215 sshd[1740]: Accepted publickey for core from 10.200.16.10 port 58102 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:49:07.426725 sshd[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:49:07.430687 systemd[1]: Started session-7.scope.
May 17 00:49:07.431211 systemd-logind[1440]: New session 7 of user core.
May 17 00:49:08.060840 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 17 00:49:08.061045 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 17 00:49:08.062114 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 17 00:49:08.062335 systemd[1]: Stopped kubelet.service.
May 17 00:49:08.063636 systemd[1]: Starting kubelet.service...
May 17 00:49:08.084855 systemd[1]: Starting docker.service...
May 17 00:49:08.129674 env[1755]: time="2025-05-17T00:49:08.129612696Z" level=info msg="Starting up"
May 17 00:49:08.139302 env[1755]: time="2025-05-17T00:49:08.139261962Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 17 00:49:08.139302 env[1755]: time="2025-05-17T00:49:08.139287962Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 17 00:49:08.139302 env[1755]: time="2025-05-17T00:49:08.139307521Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 17 00:49:08.139452 env[1755]: time="2025-05-17T00:49:08.139322681Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 17 00:49:08.141337 env[1755]: time="2025-05-17T00:49:08.141301999Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 17 00:49:08.141337 env[1755]: time="2025-05-17T00:49:08.141325039Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 17 00:49:08.141337 env[1755]: time="2025-05-17T00:49:08.141338639Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 17 00:49:08.141448 env[1755]: time="2025-05-17T00:49:08.141347279Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 17 00:49:08.145891 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport736409561-merged.mount: Deactivated successfully.
May 17 00:49:08.673997 systemd[1]: Started kubelet.service.
May 17 00:49:08.713369 kubelet[1764]: E0517 00:49:08.713332 1764 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:49:08.715132 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:49:08.715274 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:49:08.728643 env[1755]: time="2025-05-17T00:49:08.728400261Z" level=info msg="Loading containers: start."
May 17 00:49:08.924192 kernel: Initializing XFRM netlink socket
May 17 00:49:08.948656 env[1755]: time="2025-05-17T00:49:08.948623819Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 17 00:49:09.109758 systemd-networkd[1623]: docker0: Link UP
May 17 00:49:09.133206 env[1755]: time="2025-05-17T00:49:09.133159521Z" level=info msg="Loading containers: done."
May 17 00:49:09.142324 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1607219829-merged.mount: Deactivated successfully.
May 17 00:49:09.156255 env[1755]: time="2025-05-17T00:49:09.156215250Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 17 00:49:09.156399 env[1755]: time="2025-05-17T00:49:09.156378209Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
May 17 00:49:09.156497 env[1755]: time="2025-05-17T00:49:09.156475209Z" level=info msg="Daemon has completed initialization"
May 17 00:49:09.187302 systemd[1]: Started docker.service.
May 17 00:49:09.190710 env[1755]: time="2025-05-17T00:49:09.190674282Z" level=info msg="API listen on /run/docker.sock"
May 17 00:49:11.014794 env[1452]: time="2025-05-17T00:49:11.014745511Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\""
May 17 00:49:11.885748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3580281168.mount: Deactivated successfully.
May 17 00:49:13.486444 env[1452]: time="2025-05-17T00:49:13.486398921Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:13.492906 env[1452]: time="2025-05-17T00:49:13.492876034Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:13.496776 env[1452]: time="2025-05-17T00:49:13.496751270Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:13.502243 env[1452]: time="2025-05-17T00:49:13.502212425Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:13.503852 env[1452]: time="2025-05-17T00:49:13.503096304Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\""
May 17 00:49:13.505088 env[1452]: time="2025-05-17T00:49:13.505063502Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\""
May 17 00:49:15.097797 env[1452]: time="2025-05-17T00:49:15.097722695Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:15.104700 env[1452]: time="2025-05-17T00:49:15.104664208Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:15.108649 env[1452]: time="2025-05-17T00:49:15.108624644Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:15.115421 env[1452]: time="2025-05-17T00:49:15.115385958Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:15.116910 env[1452]: time="2025-05-17T00:49:15.116877797Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\""
May 17 00:49:15.119261 env[1452]: time="2025-05-17T00:49:15.119095195Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\""
May 17 00:49:15.861010 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
May 17 00:49:16.368044 env[1452]: time="2025-05-17T00:49:16.367994614Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:16.378241 env[1452]: time="2025-05-17T00:49:16.378210446Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:16.382861 env[1452]: time="2025-05-17T00:49:16.382832482Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:16.387027 env[1452]: time="2025-05-17T00:49:16.386987598Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:16.387944 env[1452]: time="2025-05-17T00:49:16.387906717Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\""
May 17 00:49:16.389240 env[1452]: time="2025-05-17T00:49:16.389214716Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\""
May 17 00:49:17.655348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3829888161.mount: Deactivated successfully.
May 17 00:49:18.711923 env[1452]: time="2025-05-17T00:49:18.711866180Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:18.717268 env[1452]: time="2025-05-17T00:49:18.717230336Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:18.721424 env[1452]: time="2025-05-17T00:49:18.721395573Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:18.724445 env[1452]: time="2025-05-17T00:49:18.724414971Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:18.725048 env[1452]: time="2025-05-17T00:49:18.725014050Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\""
May 17 00:49:18.726055 env[1452]: time="2025-05-17T00:49:18.726027809Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
May 17 00:49:18.825608 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 17 00:49:18.825771 systemd[1]: Stopped kubelet.service.
May 17 00:49:18.827148 systemd[1]: Starting kubelet.service...
May 17 00:49:19.072874 systemd[1]: Started kubelet.service.
May 17 00:49:19.107320 kubelet[1881]: E0517 00:49:19.107266 1881 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:49:19.109443 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:49:19.109562 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:49:20.054721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2509520425.mount: Deactivated successfully.
May 17 00:49:21.313379 env[1452]: time="2025-05-17T00:49:21.313333449Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:21.320590 env[1452]: time="2025-05-17T00:49:21.320551685Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:21.324934 env[1452]: time="2025-05-17T00:49:21.324894962Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:21.329677 env[1452]: time="2025-05-17T00:49:21.329641879Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:21.330501 env[1452]: time="2025-05-17T00:49:21.330470798Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
May 17 00:49:21.330993 env[1452]: time="2025-05-17T00:49:21.330971758Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 17 00:49:21.895054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2568908686.mount: Deactivated successfully.
May 17 00:49:21.929620 env[1452]: time="2025-05-17T00:49:21.929573940Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:21.943599 env[1452]: time="2025-05-17T00:49:21.943571091Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:21.948521 env[1452]: time="2025-05-17T00:49:21.948482768Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:21.955436 env[1452]: time="2025-05-17T00:49:21.955395924Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:21.956184 env[1452]: time="2025-05-17T00:49:21.956147803Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 17 00:49:21.957316 env[1452]: time="2025-05-17T00:49:21.957285203Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
May 17 00:49:22.484202 update_engine[1443]: I0517 00:49:22.484078 1443 update_attempter.cc:509] Updating boot flags...
May 17 00:49:25.697774 env[1452]: time="2025-05-17T00:49:25.697727768Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:25.706431 env[1452]: time="2025-05-17T00:49:25.706398164Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:25.711374 env[1452]: time="2025-05-17T00:49:25.711326721Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.21-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:25.717961 env[1452]: time="2025-05-17T00:49:25.717923838Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:25.718369 env[1452]: time="2025-05-17T00:49:25.718339878Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
May 17 00:49:29.325616 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
May 17 00:49:29.325791 systemd[1]: Stopped kubelet.service.
May 17 00:49:29.327143 systemd[1]: Starting kubelet.service...
May 17 00:49:29.652540 systemd[1]: Started kubelet.service.
May 17 00:49:29.694645 kubelet[1980]: E0517 00:49:29.694605 1980 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:49:29.696473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:49:29.696601 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:49:31.593109 systemd[1]: Stopped kubelet.service. May 17 00:49:31.595268 systemd[1]: Starting kubelet.service... May 17 00:49:31.634324 systemd[1]: Reloading. May 17 00:49:31.714482 /usr/lib/systemd/system-generators/torcx-generator[2012]: time="2025-05-17T00:49:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:49:31.714790 /usr/lib/systemd/system-generators/torcx-generator[2012]: time="2025-05-17T00:49:31Z" level=info msg="torcx already run" May 17 00:49:31.792734 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:49:31.792907 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:49:31.808249 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:49:31.903862 systemd[1]: Started kubelet.service. May 17 00:49:31.907231 systemd[1]: Stopping kubelet.service... 
May 17 00:49:31.907982 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:49:31.908291 systemd[1]: Stopped kubelet.service. May 17 00:49:31.910019 systemd[1]: Starting kubelet.service... May 17 00:49:32.097278 systemd[1]: Started kubelet.service. May 17 00:49:32.127892 kubelet[2079]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:49:32.127892 kubelet[2079]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:49:32.127892 kubelet[2079]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 00:49:32.128279 kubelet[2079]: I0517 00:49:32.127947 2079 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:49:33.433033 kubelet[2079]: I0517 00:49:33.432985 2079 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 17 00:49:33.433033 kubelet[2079]: I0517 00:49:33.433015 2079 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:49:33.433432 kubelet[2079]: I0517 00:49:33.433362 2079 server.go:956] "Client rotation is on, will bootstrap in background" May 17 00:49:33.449679 kubelet[2079]: E0517 00:49:33.449648 2079 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 17 00:49:33.451612 kubelet[2079]: I0517 00:49:33.451585 2079 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:49:33.458267 kubelet[2079]: E0517 00:49:33.458234 2079 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:49:33.458267 kubelet[2079]: I0517 00:49:33.458269 2079 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:49:33.461110 kubelet[2079]: I0517 00:49:33.461092 2079 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:49:33.462258 kubelet[2079]: I0517 00:49:33.462224 2079 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:49:33.462404 kubelet[2079]: I0517 00:49:33.462262 2079 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-5e40c0776b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:49:33.462496 kubelet[2079]: I0517 00:49:33.462410 2079 topology_manager.go:138] "Creating topology manager with none policy" May 17 
00:49:33.462496 kubelet[2079]: I0517 00:49:33.462419 2079 container_manager_linux.go:303] "Creating device plugin manager" May 17 00:49:33.462543 kubelet[2079]: I0517 00:49:33.462527 2079 state_mem.go:36] "Initialized new in-memory state store" May 17 00:49:33.465331 kubelet[2079]: I0517 00:49:33.465312 2079 kubelet.go:480] "Attempting to sync node with API server" May 17 00:49:33.465378 kubelet[2079]: I0517 00:49:33.465334 2079 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:49:33.465378 kubelet[2079]: I0517 00:49:33.465364 2079 kubelet.go:386] "Adding apiserver pod source" May 17 00:49:33.466561 kubelet[2079]: I0517 00:49:33.466545 2079 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:49:33.473598 kubelet[2079]: I0517 00:49:33.473580 2079 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:49:33.474281 kubelet[2079]: I0517 00:49:33.474262 2079 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 17 00:49:33.474422 kubelet[2079]: W0517 00:49:33.474393 2079 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 17 00:49:33.476452 kubelet[2079]: I0517 00:49:33.476437 2079 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:49:33.476565 kubelet[2079]: I0517 00:49:33.476554 2079 server.go:1289] "Started kubelet" May 17 00:49:33.476767 kubelet[2079]: E0517 00:49:33.476748 2079 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-5e40c0776b&limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 17 00:49:33.480157 kubelet[2079]: E0517 00:49:33.480125 2079 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 17 00:49:33.480253 kubelet[2079]: I0517 00:49:33.480208 2079 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:49:33.480515 kubelet[2079]: I0517 00:49:33.480492 2079 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:49:33.481243 kubelet[2079]: I0517 00:49:33.481182 2079 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:49:33.490572 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
May 17 00:49:33.490743 kubelet[2079]: I0517 00:49:33.490717 2079 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:49:33.493632 kubelet[2079]: I0517 00:49:33.493599 2079 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:49:33.494679 kubelet[2079]: I0517 00:49:33.494646 2079 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:49:33.494857 kubelet[2079]: E0517 00:49:33.494830 2079 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-5e40c0776b\" not found" May 17 00:49:33.496310 kubelet[2079]: I0517 00:49:33.496283 2079 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:49:33.496380 kubelet[2079]: I0517 00:49:33.496345 2079 reconciler.go:26] "Reconciler: start to sync state" May 17 00:49:33.497206 kubelet[2079]: E0517 00:49:33.497097 2079 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 17 00:49:33.497273 kubelet[2079]: E0517 00:49:33.497214 2079 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-5e40c0776b?timeout=10s\": dial tcp 10.200.20.19:6443: connect: connection refused" interval="200ms" May 17 00:49:33.498216 kubelet[2079]: E0517 00:49:33.497294 2079 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.19:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.19:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-n-5e40c0776b.18402a20435225fd default 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-n-5e40c0776b,UID:ci-3510.3.7-n-5e40c0776b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-n-5e40c0776b,},FirstTimestamp:2025-05-17 00:49:33.476529661 +0000 UTC m=+1.374567663,LastTimestamp:2025-05-17 00:49:33.476529661 +0000 UTC m=+1.374567663,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-n-5e40c0776b,}" May 17 00:49:33.498450 kubelet[2079]: I0517 00:49:33.498417 2079 factory.go:223] Registration of the systemd container factory successfully May 17 00:49:33.498521 kubelet[2079]: I0517 00:49:33.498498 2079 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:49:33.500063 kubelet[2079]: I0517 00:49:33.500036 2079 factory.go:223] Registration of the containerd container factory successfully May 17 00:49:33.501509 kubelet[2079]: I0517 00:49:33.501489 2079 server.go:317] "Adding debug handlers to kubelet server" May 17 00:49:33.521547 kubelet[2079]: E0517 00:49:33.521521 2079 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:49:33.568933 kubelet[2079]: I0517 00:49:33.568910 2079 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:49:33.568933 kubelet[2079]: I0517 00:49:33.568927 2079 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:49:33.569059 kubelet[2079]: I0517 00:49:33.568946 2079 state_mem.go:36] "Initialized new in-memory state store" May 17 00:49:33.573384 kubelet[2079]: I0517 00:49:33.573361 2079 policy_none.go:49] "None policy: Start" May 17 00:49:33.573384 kubelet[2079]: I0517 00:49:33.573388 2079 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:49:33.573475 kubelet[2079]: I0517 00:49:33.573398 2079 state_mem.go:35] "Initializing new in-memory state store" May 17 00:49:33.579774 systemd[1]: Created slice kubepods.slice. May 17 00:49:33.583947 systemd[1]: Created slice kubepods-burstable.slice. May 17 00:49:33.586568 systemd[1]: Created slice kubepods-besteffort.slice. May 17 00:49:33.595925 kubelet[2079]: E0517 00:49:33.595875 2079 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-5e40c0776b\" not found" May 17 00:49:33.596800 kubelet[2079]: E0517 00:49:33.596775 2079 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 17 00:49:33.596932 kubelet[2079]: I0517 00:49:33.596913 2079 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:49:33.596975 kubelet[2079]: I0517 00:49:33.596935 2079 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:49:33.598478 kubelet[2079]: I0517 00:49:33.598131 2079 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:49:33.600696 kubelet[2079]: E0517 00:49:33.600537 2079 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. 
Ignoring." err="no imagefs label for configured runtime" May 17 00:49:33.600696 kubelet[2079]: E0517 00:49:33.600578 2079 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.7-n-5e40c0776b\" not found" May 17 00:49:33.691084 kubelet[2079]: I0517 00:49:33.690973 2079 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 17 00:49:33.693881 kubelet[2079]: I0517 00:49:33.693852 2079 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 17 00:49:33.693881 kubelet[2079]: I0517 00:49:33.693882 2079 status_manager.go:230] "Starting to sync pod status with apiserver" May 17 00:49:33.693996 kubelet[2079]: I0517 00:49:33.693903 2079 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 17 00:49:33.693996 kubelet[2079]: I0517 00:49:33.693909 2079 kubelet.go:2436] "Starting kubelet main sync loop" May 17 00:49:33.693996 kubelet[2079]: E0517 00:49:33.693950 2079 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 17 00:49:33.695504 kubelet[2079]: E0517 00:49:33.695481 2079 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 17 00:49:33.697735 kubelet[2079]: E0517 00:49:33.697691 2079 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-5e40c0776b?timeout=10s\": dial tcp 10.200.20.19:6443: connect: connection refused" interval="400ms" May 17 00:49:33.698264 kubelet[2079]: I0517 00:49:33.698243 2079 kubelet_node_status.go:75] "Attempting 
to register node" node="ci-3510.3.7-n-5e40c0776b" May 17 00:49:33.698798 kubelet[2079]: E0517 00:49:33.698745 2079 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.19:6443/api/v1/nodes\": dial tcp 10.200.20.19:6443: connect: connection refused" node="ci-3510.3.7-n-5e40c0776b" May 17 00:49:33.800320 kubelet[2079]: I0517 00:49:33.800282 2079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea81a4b5cc5a8c4abde80b38e2082727-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-5e40c0776b\" (UID: \"ea81a4b5cc5a8c4abde80b38e2082727\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-5e40c0776b" May 17 00:49:33.800504 kubelet[2079]: I0517 00:49:33.800486 2079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea81a4b5cc5a8c4abde80b38e2082727-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-5e40c0776b\" (UID: \"ea81a4b5cc5a8c4abde80b38e2082727\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-5e40c0776b" May 17 00:49:33.800611 kubelet[2079]: I0517 00:49:33.800592 2079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea81a4b5cc5a8c4abde80b38e2082727-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-5e40c0776b\" (UID: \"ea81a4b5cc5a8c4abde80b38e2082727\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-5e40c0776b" May 17 00:49:33.806061 systemd[1]: Created slice kubepods-burstable-podea81a4b5cc5a8c4abde80b38e2082727.slice. 
May 17 00:49:33.810828 kubelet[2079]: E0517 00:49:33.810804 2079 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-5e40c0776b\" not found" node="ci-3510.3.7-n-5e40c0776b" May 17 00:49:33.814028 systemd[1]: Created slice kubepods-burstable-podcacafcdb6d4227e8b7ec1b0659b14a3f.slice. May 17 00:49:33.815820 kubelet[2079]: E0517 00:49:33.815794 2079 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-5e40c0776b\" not found" node="ci-3510.3.7-n-5e40c0776b" May 17 00:49:33.832187 systemd[1]: Created slice kubepods-burstable-podb034598f6122cb8a0f4da320c7a74dc2.slice. May 17 00:49:33.833944 kubelet[2079]: E0517 00:49:33.833923 2079 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-5e40c0776b\" not found" node="ci-3510.3.7-n-5e40c0776b" May 17 00:49:33.900846 kubelet[2079]: I0517 00:49:33.900818 2079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cacafcdb6d4227e8b7ec1b0659b14a3f-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-5e40c0776b\" (UID: \"cacafcdb6d4227e8b7ec1b0659b14a3f\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-5e40c0776b" May 17 00:49:33.900846 kubelet[2079]: I0517 00:49:33.900845 2079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cacafcdb6d4227e8b7ec1b0659b14a3f-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-5e40c0776b\" (UID: \"cacafcdb6d4227e8b7ec1b0659b14a3f\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-5e40c0776b" May 17 00:49:33.900972 kubelet[2079]: I0517 00:49:33.900872 2079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/b034598f6122cb8a0f4da320c7a74dc2-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-5e40c0776b\" (UID: \"b034598f6122cb8a0f4da320c7a74dc2\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-5e40c0776b" May 17 00:49:33.900972 kubelet[2079]: I0517 00:49:33.900902 2079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cacafcdb6d4227e8b7ec1b0659b14a3f-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-5e40c0776b\" (UID: \"cacafcdb6d4227e8b7ec1b0659b14a3f\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-5e40c0776b" May 17 00:49:33.900972 kubelet[2079]: I0517 00:49:33.900917 2079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cacafcdb6d4227e8b7ec1b0659b14a3f-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-5e40c0776b\" (UID: \"cacafcdb6d4227e8b7ec1b0659b14a3f\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-5e40c0776b" May 17 00:49:33.900972 kubelet[2079]: I0517 00:49:33.900934 2079 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cacafcdb6d4227e8b7ec1b0659b14a3f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-5e40c0776b\" (UID: \"cacafcdb6d4227e8b7ec1b0659b14a3f\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-5e40c0776b" May 17 00:49:33.901281 kubelet[2079]: I0517 00:49:33.901267 2079 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-5e40c0776b" May 17 00:49:33.901710 kubelet[2079]: E0517 00:49:33.901688 2079 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.19:6443/api/v1/nodes\": dial tcp 10.200.20.19:6443: connect: connection refused" node="ci-3510.3.7-n-5e40c0776b" May 17 
00:49:34.099003 kubelet[2079]: E0517 00:49:34.098969 2079 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-5e40c0776b?timeout=10s\": dial tcp 10.200.20.19:6443: connect: connection refused" interval="800ms" May 17 00:49:34.112196 env[1452]: time="2025-05-17T00:49:34.111997398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-5e40c0776b,Uid:ea81a4b5cc5a8c4abde80b38e2082727,Namespace:kube-system,Attempt:0,}" May 17 00:49:34.117480 env[1452]: time="2025-05-17T00:49:34.117434237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-5e40c0776b,Uid:cacafcdb6d4227e8b7ec1b0659b14a3f,Namespace:kube-system,Attempt:0,}" May 17 00:49:34.135354 env[1452]: time="2025-05-17T00:49:34.135325712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-5e40c0776b,Uid:b034598f6122cb8a0f4da320c7a74dc2,Namespace:kube-system,Attempt:0,}" May 17 00:49:34.303866 kubelet[2079]: I0517 00:49:34.303831 2079 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-5e40c0776b" May 17 00:49:34.304246 kubelet[2079]: E0517 00:49:34.304202 2079 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.19:6443/api/v1/nodes\": dial tcp 10.200.20.19:6443: connect: connection refused" node="ci-3510.3.7-n-5e40c0776b" May 17 00:49:34.531783 kubelet[2079]: E0517 00:49:34.531355 2079 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-5e40c0776b&limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 17 00:49:34.880567 kubelet[2079]: E0517 00:49:34.880534 2079 reflector.go:200] 
"Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 17 00:49:34.900284 kubelet[2079]: E0517 00:49:34.900211 2079 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-5e40c0776b?timeout=10s\": dial tcp 10.200.20.19:6443: connect: connection refused" interval="1.6s" May 17 00:49:34.983504 kubelet[2079]: E0517 00:49:34.983462 2079 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 17 00:49:35.105876 kubelet[2079]: I0517 00:49:35.105848 2079 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-5e40c0776b" May 17 00:49:35.106388 kubelet[2079]: E0517 00:49:35.106359 2079 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.19:6443/api/v1/nodes\": dial tcp 10.200.20.19:6443: connect: connection refused" node="ci-3510.3.7-n-5e40c0776b" May 17 00:49:35.121061 kubelet[2079]: E0517 00:49:35.121035 2079 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 17 00:49:35.515200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3515516806.mount: Deactivated 
successfully. May 17 00:49:35.559612 env[1452]: time="2025-05-17T00:49:35.559559285Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:35.563408 env[1452]: time="2025-05-17T00:49:35.563380729Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:35.577878 env[1452]: time="2025-05-17T00:49:35.577838598Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:35.582355 env[1452]: time="2025-05-17T00:49:35.582330544Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:35.587999 env[1452]: time="2025-05-17T00:49:35.587961847Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:35.591238 env[1452]: time="2025-05-17T00:49:35.591163551Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:35.601206 env[1452]: time="2025-05-17T00:49:35.601160275Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:35.605196 kubelet[2079]: E0517 00:49:35.604974 2079 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate 
signing request: Post \"https://10.200.20.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" May 17 00:49:35.610413 env[1452]: time="2025-05-17T00:49:35.610375255Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:35.626530 env[1452]: time="2025-05-17T00:49:35.626494418Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:35.631815 env[1452]: time="2025-05-17T00:49:35.631789350Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:35.638346 env[1452]: time="2025-05-17T00:49:35.638311681Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:35.643147 env[1452]: time="2025-05-17T00:49:35.643120358Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:49:35.742096 env[1452]: time="2025-05-17T00:49:35.736681675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:49:35.742096 env[1452]: time="2025-05-17T00:49:35.736722116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:49:35.742096 env[1452]: time="2025-05-17T00:49:35.736731876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:49:35.742096 env[1452]: time="2025-05-17T00:49:35.736900962Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/19d5a6234cacbc721f21bfb75c17e707dddf4da8f710c6b46c0a24971d9c12c2 pid=2122 runtime=io.containerd.runc.v2 May 17 00:49:35.760724 env[1452]: time="2025-05-17T00:49:35.760651173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:49:35.760724 env[1452]: time="2025-05-17T00:49:35.760694014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:49:35.760894 env[1452]: time="2025-05-17T00:49:35.760712855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:49:35.761032 env[1452]: time="2025-05-17T00:49:35.760996544Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f6893b2f65219df6f8e90babe5032f1d6ccba78f74632d80820c7d7a7fcbfc4 pid=2149 runtime=io.containerd.runc.v2 May 17 00:49:35.763222 env[1452]: time="2025-05-17T00:49:35.763000729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:49:35.763222 env[1452]: time="2025-05-17T00:49:35.763030330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:49:35.763222 env[1452]: time="2025-05-17T00:49:35.763041890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:49:35.765104 env[1452]: time="2025-05-17T00:49:35.764926432Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/14b039f1d06bb5aaa67403fea2993694dfcb66e0e3fde5649d4b69c1b4e9ad18 pid=2147 runtime=io.containerd.runc.v2 May 17 00:49:35.767122 systemd[1]: Started cri-containerd-19d5a6234cacbc721f21bfb75c17e707dddf4da8f710c6b46c0a24971d9c12c2.scope. May 17 00:49:35.789330 systemd[1]: Started cri-containerd-0f6893b2f65219df6f8e90babe5032f1d6ccba78f74632d80820c7d7a7fcbfc4.scope. May 17 00:49:35.810989 systemd[1]: Started cri-containerd-14b039f1d06bb5aaa67403fea2993694dfcb66e0e3fde5649d4b69c1b4e9ad18.scope. May 17 00:49:35.817275 env[1452]: time="2025-05-17T00:49:35.815955768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-5e40c0776b,Uid:ea81a4b5cc5a8c4abde80b38e2082727,Namespace:kube-system,Attempt:0,} returns sandbox id \"19d5a6234cacbc721f21bfb75c17e707dddf4da8f710c6b46c0a24971d9c12c2\"" May 17 00:49:35.830617 env[1452]: time="2025-05-17T00:49:35.830573003Z" level=info msg="CreateContainer within sandbox \"19d5a6234cacbc721f21bfb75c17e707dddf4da8f710c6b46c0a24971d9c12c2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:49:35.850457 env[1452]: time="2025-05-17T00:49:35.850417807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-5e40c0776b,Uid:cacafcdb6d4227e8b7ec1b0659b14a3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f6893b2f65219df6f8e90babe5032f1d6ccba78f74632d80820c7d7a7fcbfc4\"" May 17 00:49:35.857797 env[1452]: time="2025-05-17T00:49:35.857771686Z" level=info msg="CreateContainer within sandbox \"0f6893b2f65219df6f8e90babe5032f1d6ccba78f74632d80820c7d7a7fcbfc4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:49:35.862294 env[1452]: time="2025-05-17T00:49:35.862244311Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-5e40c0776b,Uid:b034598f6122cb8a0f4da320c7a74dc2,Namespace:kube-system,Attempt:0,} returns sandbox id \"14b039f1d06bb5aaa67403fea2993694dfcb66e0e3fde5649d4b69c1b4e9ad18\"" May 17 00:49:35.865432 env[1452]: time="2025-05-17T00:49:35.865394093Z" level=info msg="CreateContainer within sandbox \"19d5a6234cacbc721f21bfb75c17e707dddf4da8f710c6b46c0a24971d9c12c2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e73ae60f685092388cc2b52e006457764281f7bf8dd1b521f5179a4244ed343a\"" May 17 00:49:35.866044 env[1452]: time="2025-05-17T00:49:35.866020593Z" level=info msg="StartContainer for \"e73ae60f685092388cc2b52e006457764281f7bf8dd1b521f5179a4244ed343a\"" May 17 00:49:35.869628 env[1452]: time="2025-05-17T00:49:35.869602030Z" level=info msg="CreateContainer within sandbox \"14b039f1d06bb5aaa67403fea2993694dfcb66e0e3fde5649d4b69c1b4e9ad18\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:49:35.883038 systemd[1]: Started cri-containerd-e73ae60f685092388cc2b52e006457764281f7bf8dd1b521f5179a4244ed343a.scope. 
May 17 00:49:35.923782 env[1452]: time="2025-05-17T00:49:35.923745787Z" level=info msg="StartContainer for \"e73ae60f685092388cc2b52e006457764281f7bf8dd1b521f5179a4244ed343a\" returns successfully" May 17 00:49:35.929657 env[1452]: time="2025-05-17T00:49:35.929612498Z" level=info msg="CreateContainer within sandbox \"0f6893b2f65219df6f8e90babe5032f1d6ccba78f74632d80820c7d7a7fcbfc4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b0da2f9cfb2e8582d867f8399907d55fe935a975619c3f18c6bb183ed92bbf58\"" May 17 00:49:35.930131 env[1452]: time="2025-05-17T00:49:35.930108474Z" level=info msg="StartContainer for \"b0da2f9cfb2e8582d867f8399907d55fe935a975619c3f18c6bb183ed92bbf58\"" May 17 00:49:35.938438 env[1452]: time="2025-05-17T00:49:35.938407023Z" level=info msg="CreateContainer within sandbox \"14b039f1d06bb5aaa67403fea2993694dfcb66e0e3fde5649d4b69c1b4e9ad18\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c4bd5c2346aa5d45963099fd1bb9e7ac7c2aeee3f389124af13aef25c6e79c11\"" May 17 00:49:35.938956 env[1452]: time="2025-05-17T00:49:35.938932520Z" level=info msg="StartContainer for \"c4bd5c2346aa5d45963099fd1bb9e7ac7c2aeee3f389124af13aef25c6e79c11\"" May 17 00:49:35.945278 systemd[1]: Started cri-containerd-b0da2f9cfb2e8582d867f8399907d55fe935a975619c3f18c6bb183ed92bbf58.scope. May 17 00:49:35.964148 systemd[1]: Started cri-containerd-c4bd5c2346aa5d45963099fd1bb9e7ac7c2aeee3f389124af13aef25c6e79c11.scope. 
May 17 00:49:36.002362 env[1452]: time="2025-05-17T00:49:36.002325577Z" level=info msg="StartContainer for \"c4bd5c2346aa5d45963099fd1bb9e7ac7c2aeee3f389124af13aef25c6e79c11\" returns successfully" May 17 00:49:36.014394 env[1452]: time="2025-05-17T00:49:36.014349317Z" level=info msg="StartContainer for \"b0da2f9cfb2e8582d867f8399907d55fe935a975619c3f18c6bb183ed92bbf58\" returns successfully" May 17 00:49:36.701304 kubelet[2079]: E0517 00:49:36.701267 2079 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-5e40c0776b\" not found" node="ci-3510.3.7-n-5e40c0776b" May 17 00:49:36.703328 kubelet[2079]: E0517 00:49:36.703299 2079 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-5e40c0776b\" not found" node="ci-3510.3.7-n-5e40c0776b" May 17 00:49:36.705054 kubelet[2079]: E0517 00:49:36.705025 2079 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-5e40c0776b\" not found" node="ci-3510.3.7-n-5e40c0776b" May 17 00:49:36.708152 kubelet[2079]: I0517 00:49:36.708125 2079 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-5e40c0776b" May 17 00:49:37.707352 kubelet[2079]: E0517 00:49:37.707318 2079 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-5e40c0776b\" not found" node="ci-3510.3.7-n-5e40c0776b" May 17 00:49:37.707719 kubelet[2079]: E0517 00:49:37.707670 2079 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-5e40c0776b\" not found" node="ci-3510.3.7-n-5e40c0776b" May 17 00:49:37.708391 kubelet[2079]: E0517 00:49:37.708368 2079 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.7-n-5e40c0776b\" not found" 
node="ci-3510.3.7-n-5e40c0776b" May 17 00:49:38.116810 kubelet[2079]: E0517 00:49:38.116773 2079 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.7-n-5e40c0776b\" not found" node="ci-3510.3.7-n-5e40c0776b" May 17 00:49:38.286595 kubelet[2079]: I0517 00:49:38.286556 2079 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.7-n-5e40c0776b" May 17 00:49:38.297038 kubelet[2079]: I0517 00:49:38.297005 2079 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.7-n-5e40c0776b" May 17 00:49:38.316588 kubelet[2079]: E0517 00:49:38.316545 2079 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.7-n-5e40c0776b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.7-n-5e40c0776b" May 17 00:49:38.316758 kubelet[2079]: I0517 00:49:38.316744 2079 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-5e40c0776b" May 17 00:49:38.322933 kubelet[2079]: E0517 00:49:38.322904 2079 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.7-n-5e40c0776b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-5e40c0776b" May 17 00:49:38.323079 kubelet[2079]: I0517 00:49:38.323066 2079 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.7-n-5e40c0776b" May 17 00:49:38.330563 kubelet[2079]: E0517 00:49:38.330535 2079 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.7-n-5e40c0776b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.7-n-5e40c0776b" May 17 00:49:38.480104 kubelet[2079]: I0517 00:49:38.480003 2079 apiserver.go:52] "Watching apiserver" May 17 
00:49:38.497260 kubelet[2079]: I0517 00:49:38.497218 2079 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:49:40.099303 kubelet[2079]: I0517 00:49:40.099269 2079 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.7-n-5e40c0776b" May 17 00:49:40.108270 kubelet[2079]: I0517 00:49:40.108234 2079 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 17 00:49:40.169192 systemd[1]: Reloading. May 17 00:49:40.266308 /usr/lib/systemd/system-generators/torcx-generator[2384]: time="2025-05-17T00:49:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:49:40.266637 /usr/lib/systemd/system-generators/torcx-generator[2384]: time="2025-05-17T00:49:40Z" level=info msg="torcx already run" May 17 00:49:40.338823 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:49:40.338984 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:49:40.354643 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:49:40.476306 kubelet[2079]: I0517 00:49:40.476235 2079 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:49:40.476582 systemd[1]: Stopping kubelet.service... 
May 17 00:49:40.500654 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:49:40.500845 systemd[1]: Stopped kubelet.service. May 17 00:49:40.500893 systemd[1]: kubelet.service: Consumed 1.707s CPU time. May 17 00:49:40.502509 systemd[1]: Starting kubelet.service... May 17 00:49:40.591991 systemd[1]: Started kubelet.service. May 17 00:49:40.635690 kubelet[2445]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:49:40.635976 kubelet[2445]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:49:40.636029 kubelet[2445]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 00:49:40.636152 kubelet[2445]: I0517 00:49:40.636122 2445 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:49:40.643869 kubelet[2445]: I0517 00:49:40.643842 2445 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 17 00:49:40.643992 kubelet[2445]: I0517 00:49:40.643981 2445 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:49:40.644266 kubelet[2445]: I0517 00:49:40.644250 2445 server.go:956] "Client rotation is on, will bootstrap in background" May 17 00:49:40.645515 kubelet[2445]: I0517 00:49:40.645497 2445 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" May 17 00:49:40.648234 kubelet[2445]: I0517 00:49:40.648203 2445 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:49:40.651086 kubelet[2445]: E0517 00:49:40.651062 2445 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:49:40.651279 kubelet[2445]: I0517 00:49:40.651267 2445 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:49:40.654310 kubelet[2445]: I0517 00:49:40.654294 2445 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:49:40.654614 kubelet[2445]: I0517 00:49:40.654591 2445 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:49:40.654806 kubelet[2445]: I0517 00:49:40.654676 2445 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-5e40c0776b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:49:40.654929 kubelet[2445]: I0517 00:49:40.654916 2445 topology_manager.go:138] "Creating topology manager with none policy" May 17 
00:49:40.654990 kubelet[2445]: I0517 00:49:40.654982 2445 container_manager_linux.go:303] "Creating device plugin manager" May 17 00:49:40.655091 kubelet[2445]: I0517 00:49:40.655081 2445 state_mem.go:36] "Initialized new in-memory state store" May 17 00:49:40.655297 kubelet[2445]: I0517 00:49:40.655285 2445 kubelet.go:480] "Attempting to sync node with API server" May 17 00:49:40.655382 kubelet[2445]: I0517 00:49:40.655372 2445 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:49:40.655523 kubelet[2445]: I0517 00:49:40.655510 2445 kubelet.go:386] "Adding apiserver pod source" May 17 00:49:40.655595 kubelet[2445]: I0517 00:49:40.655585 2445 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:49:40.659955 kubelet[2445]: I0517 00:49:40.659939 2445 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:49:40.660768 kubelet[2445]: I0517 00:49:40.660640 2445 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 17 00:49:40.673980 kubelet[2445]: I0517 00:49:40.673956 2445 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:49:40.674054 kubelet[2445]: I0517 00:49:40.673995 2445 server.go:1289] "Started kubelet" May 17 00:49:40.678905 kubelet[2445]: I0517 00:49:40.678882 2445 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:49:40.684813 kubelet[2445]: I0517 00:49:40.684796 2445 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:49:40.685323 kubelet[2445]: I0517 00:49:40.685258 2445 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:49:40.686466 kubelet[2445]: I0517 00:49:40.686448 2445 server.go:317] "Adding debug handlers to kubelet server" May 17 
00:49:40.689095 kubelet[2445]: I0517 00:49:40.689054 2445 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:49:40.689391 kubelet[2445]: I0517 00:49:40.689376 2445 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:49:40.695877 kubelet[2445]: E0517 00:49:40.695851 2445 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:49:40.697427 kubelet[2445]: I0517 00:49:40.697410 2445 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:49:40.697638 kubelet[2445]: I0517 00:49:40.697626 2445 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:49:40.697804 kubelet[2445]: I0517 00:49:40.697794 2445 reconciler.go:26] "Reconciler: start to sync state" May 17 00:49:40.698477 kubelet[2445]: I0517 00:49:40.698459 2445 factory.go:223] Registration of the systemd container factory successfully May 17 00:49:40.698658 kubelet[2445]: I0517 00:49:40.698638 2445 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:49:40.699999 kubelet[2445]: I0517 00:49:40.699981 2445 factory.go:223] Registration of the containerd container factory successfully May 17 00:49:40.709057 kubelet[2445]: I0517 00:49:40.709023 2445 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 17 00:49:40.709832 kubelet[2445]: I0517 00:49:40.709810 2445 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" May 17 00:49:40.709879 kubelet[2445]: I0517 00:49:40.709835 2445 status_manager.go:230] "Starting to sync pod status with apiserver" May 17 00:49:40.709879 kubelet[2445]: I0517 00:49:40.709854 2445 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 17 00:49:40.709879 kubelet[2445]: I0517 00:49:40.709862 2445 kubelet.go:2436] "Starting kubelet main sync loop" May 17 00:49:40.709944 kubelet[2445]: E0517 00:49:40.709897 2445 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:49:40.746956 kubelet[2445]: I0517 00:49:40.746926 2445 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:49:40.746956 kubelet[2445]: I0517 00:49:40.746948 2445 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:49:40.747107 kubelet[2445]: I0517 00:49:40.746970 2445 state_mem.go:36] "Initialized new in-memory state store" May 17 00:49:40.747107 kubelet[2445]: I0517 00:49:40.747084 2445 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:49:40.747107 kubelet[2445]: I0517 00:49:40.747094 2445 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:49:40.747187 kubelet[2445]: I0517 00:49:40.747110 2445 policy_none.go:49] "None policy: Start" May 17 00:49:40.747187 kubelet[2445]: I0517 00:49:40.747119 2445 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:49:40.747187 kubelet[2445]: I0517 00:49:40.747127 2445 state_mem.go:35] "Initializing new in-memory state store" May 17 00:49:40.747312 kubelet[2445]: I0517 00:49:40.747233 2445 state_mem.go:75] "Updated machine memory state" May 17 00:49:40.750418 kubelet[2445]: E0517 00:49:40.750392 2445 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 17 00:49:40.750587 kubelet[2445]: I0517 
00:49:40.750567 2445 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:49:40.750623 kubelet[2445]: I0517 00:49:40.750587 2445 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:49:40.751151 kubelet[2445]: I0517 00:49:40.751128 2445 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:49:40.755523 kubelet[2445]: E0517 00:49:40.755497 2445 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 17 00:49:40.810866 kubelet[2445]: I0517 00:49:40.810829 2445 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.7-n-5e40c0776b" May 17 00:49:40.811013 kubelet[2445]: I0517 00:49:40.810991 2445 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-5e40c0776b" May 17 00:49:40.813193 kubelet[2445]: I0517 00:49:40.811317 2445 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.7-n-5e40c0776b" May 17 00:49:40.824077 kubelet[2445]: I0517 00:49:40.824043 2445 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 17 00:49:40.829134 kubelet[2445]: I0517 00:49:40.829110 2445 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 17 00:49:40.829628 kubelet[2445]: I0517 00:49:40.829613 2445 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 17 00:49:40.829822 kubelet[2445]: E0517 00:49:40.829795 2445 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-3510.3.7-n-5e40c0776b\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.7-n-5e40c0776b" May 17 00:49:40.857139 kubelet[2445]: I0517 00:49:40.857113 2445 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.7-n-5e40c0776b" May 17 00:49:40.872474 kubelet[2445]: I0517 00:49:40.872131 2445 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.7-n-5e40c0776b" May 17 00:49:40.872474 kubelet[2445]: I0517 00:49:40.872227 2445 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.7-n-5e40c0776b" May 17 00:49:40.899950 kubelet[2445]: I0517 00:49:40.899087 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cacafcdb6d4227e8b7ec1b0659b14a3f-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-5e40c0776b\" (UID: \"cacafcdb6d4227e8b7ec1b0659b14a3f\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-5e40c0776b" May 17 00:49:40.899950 kubelet[2445]: I0517 00:49:40.899711 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cacafcdb6d4227e8b7ec1b0659b14a3f-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-5e40c0776b\" (UID: \"cacafcdb6d4227e8b7ec1b0659b14a3f\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-5e40c0776b" May 17 00:49:40.899950 kubelet[2445]: I0517 00:49:40.899739 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cacafcdb6d4227e8b7ec1b0659b14a3f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-5e40c0776b\" (UID: \"cacafcdb6d4227e8b7ec1b0659b14a3f\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-5e40c0776b" May 17 00:49:40.899950 kubelet[2445]: I0517 00:49:40.899759 2445 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b034598f6122cb8a0f4da320c7a74dc2-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-5e40c0776b\" (UID: \"b034598f6122cb8a0f4da320c7a74dc2\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-5e40c0776b" May 17 00:49:40.899950 kubelet[2445]: I0517 00:49:40.899773 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea81a4b5cc5a8c4abde80b38e2082727-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-5e40c0776b\" (UID: \"ea81a4b5cc5a8c4abde80b38e2082727\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-5e40c0776b" May 17 00:49:40.900134 kubelet[2445]: I0517 00:49:40.899791 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea81a4b5cc5a8c4abde80b38e2082727-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-5e40c0776b\" (UID: \"ea81a4b5cc5a8c4abde80b38e2082727\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-5e40c0776b" May 17 00:49:40.900134 kubelet[2445]: I0517 00:49:40.899805 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cacafcdb6d4227e8b7ec1b0659b14a3f-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-5e40c0776b\" (UID: \"cacafcdb6d4227e8b7ec1b0659b14a3f\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-5e40c0776b" May 17 00:49:40.900134 kubelet[2445]: I0517 00:49:40.899818 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cacafcdb6d4227e8b7ec1b0659b14a3f-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-5e40c0776b\" (UID: \"cacafcdb6d4227e8b7ec1b0659b14a3f\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.7-n-5e40c0776b" May 17 00:49:40.900134 kubelet[2445]: I0517 00:49:40.899833 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea81a4b5cc5a8c4abde80b38e2082727-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-5e40c0776b\" (UID: \"ea81a4b5cc5a8c4abde80b38e2082727\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-5e40c0776b" May 17 00:49:41.220588 sudo[2482]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 17 00:49:41.220873 sudo[2482]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 17 00:49:41.660391 kubelet[2445]: I0517 00:49:41.660360 2445 apiserver.go:52] "Watching apiserver" May 17 00:49:41.669776 sudo[2482]: pam_unix(sudo:session): session closed for user root May 17 00:49:41.698614 kubelet[2445]: I0517 00:49:41.698570 2445 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:49:41.731549 kubelet[2445]: I0517 00:49:41.731523 2445 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.7-n-5e40c0776b" May 17 00:49:41.732004 kubelet[2445]: I0517 00:49:41.731990 2445 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.7-n-5e40c0776b" May 17 00:49:41.745950 kubelet[2445]: I0517 00:49:41.745922 2445 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 17 00:49:41.746041 kubelet[2445]: E0517 00:49:41.745979 2445 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.7-n-5e40c0776b\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.7-n-5e40c0776b" May 17 00:49:41.748712 kubelet[2445]: I0517 00:49:41.748685 2445 warnings.go:110] "Warning: 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" May 17 00:49:41.748852 kubelet[2445]: E0517 00:49:41.748838 2445 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.7-n-5e40c0776b\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.7-n-5e40c0776b" May 17 00:49:41.776487 kubelet[2445]: I0517 00:49:41.776428 2445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.7-n-5e40c0776b" podStartSLOduration=1.776405804 podStartE2EDuration="1.776405804s" podCreationTimestamp="2025-05-17 00:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:49:41.758099821 +0000 UTC m=+1.157853141" watchObservedRunningTime="2025-05-17 00:49:41.776405804 +0000 UTC m=+1.176159164" May 17 00:49:41.790539 kubelet[2445]: I0517 00:49:41.790483 2445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-5e40c0776b" podStartSLOduration=1.7904685900000001 podStartE2EDuration="1.79046859s" podCreationTimestamp="2025-05-17 00:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:49:41.778229654 +0000 UTC m=+1.177983014" watchObservedRunningTime="2025-05-17 00:49:41.79046859 +0000 UTC m=+1.190221950" May 17 00:49:41.804481 kubelet[2445]: I0517 00:49:41.804423 2445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.7-n-5e40c0776b" podStartSLOduration=1.803682673 podStartE2EDuration="1.803682673s" podCreationTimestamp="2025-05-17 00:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 
00:49:41.790956003 +0000 UTC m=+1.190709323" watchObservedRunningTime="2025-05-17 00:49:41.803682673 +0000 UTC m=+1.203436033" May 17 00:49:43.556094 sudo[1743]: pam_unix(sudo:session): session closed for user root May 17 00:49:43.644509 sshd[1740]: pam_unix(sshd:session): session closed for user core May 17 00:49:43.647958 systemd[1]: sshd@4-10.200.20.19:22-10.200.16.10:58102.service: Deactivated successfully. May 17 00:49:43.648682 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:49:43.648833 systemd[1]: session-7.scope: Consumed 7.637s CPU time. May 17 00:49:43.649632 systemd-logind[1440]: Session 7 logged out. Waiting for processes to exit. May 17 00:49:43.650436 systemd-logind[1440]: Removed session 7. May 17 00:49:47.332328 kubelet[2445]: I0517 00:49:47.332291 2445 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:49:47.333083 env[1452]: time="2025-05-17T00:49:47.332990837Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:49:47.333336 kubelet[2445]: I0517 00:49:47.333150 2445 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:49:48.508963 systemd[1]: Created slice kubepods-besteffort-podae389757_4162_4c8f_bb1e_2ded929db8ca.slice. May 17 00:49:48.525758 systemd[1]: Created slice kubepods-burstable-poda6d17eff_e44c_499f_9d1f_5bccae4b1278.slice. 
May 17 00:49:48.539563 kubelet[2445]: I0517 00:49:48.539514 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-lib-modules\") pod \"cilium-ql9dt\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") " pod="kube-system/cilium-ql9dt"
May 17 00:49:48.539563 kubelet[2445]: I0517 00:49:48.539555 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-cilium-run\") pod \"cilium-ql9dt\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") " pod="kube-system/cilium-ql9dt"
May 17 00:49:48.539563 kubelet[2445]: I0517 00:49:48.539573 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-xtables-lock\") pod \"cilium-ql9dt\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") " pod="kube-system/cilium-ql9dt"
May 17 00:49:48.539956 kubelet[2445]: I0517 00:49:48.539587 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a6d17eff-e44c-499f-9d1f-5bccae4b1278-clustermesh-secrets\") pod \"cilium-ql9dt\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") " pod="kube-system/cilium-ql9dt"
May 17 00:49:48.539956 kubelet[2445]: I0517 00:49:48.539602 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl8rl\" (UniqueName: \"kubernetes.io/projected/a6d17eff-e44c-499f-9d1f-5bccae4b1278-kube-api-access-fl8rl\") pod \"cilium-ql9dt\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") " pod="kube-system/cilium-ql9dt"
May 17 00:49:48.539956 kubelet[2445]: I0517 00:49:48.539623 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-host-proc-sys-net\") pod \"cilium-ql9dt\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") " pod="kube-system/cilium-ql9dt"
May 17 00:49:48.539956 kubelet[2445]: I0517 00:49:48.539636 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-host-proc-sys-kernel\") pod \"cilium-ql9dt\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") " pod="kube-system/cilium-ql9dt"
May 17 00:49:48.539956 kubelet[2445]: I0517 00:49:48.539650 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a6d17eff-e44c-499f-9d1f-5bccae4b1278-hubble-tls\") pod \"cilium-ql9dt\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") " pod="kube-system/cilium-ql9dt"
May 17 00:49:48.540077 kubelet[2445]: I0517 00:49:48.539698 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g6rt\" (UniqueName: \"kubernetes.io/projected/ae389757-4162-4c8f-bb1e-2ded929db8ca-kube-api-access-2g6rt\") pod \"kube-proxy-8blqk\" (UID: \"ae389757-4162-4c8f-bb1e-2ded929db8ca\") " pod="kube-system/kube-proxy-8blqk"
May 17 00:49:48.540077 kubelet[2445]: I0517 00:49:48.539754 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-bpf-maps\") pod \"cilium-ql9dt\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") " pod="kube-system/cilium-ql9dt"
May 17 00:49:48.540077 kubelet[2445]: I0517 00:49:48.539769 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-cni-path\") pod \"cilium-ql9dt\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") " pod="kube-system/cilium-ql9dt"
May 17 00:49:48.540077 kubelet[2445]: I0517 00:49:48.539785 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-etc-cni-netd\") pod \"cilium-ql9dt\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") " pod="kube-system/cilium-ql9dt"
May 17 00:49:48.540077 kubelet[2445]: I0517 00:49:48.539802 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a6d17eff-e44c-499f-9d1f-5bccae4b1278-cilium-config-path\") pod \"cilium-ql9dt\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") " pod="kube-system/cilium-ql9dt"
May 17 00:49:48.540077 kubelet[2445]: I0517 00:49:48.539817 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ae389757-4162-4c8f-bb1e-2ded929db8ca-kube-proxy\") pod \"kube-proxy-8blqk\" (UID: \"ae389757-4162-4c8f-bb1e-2ded929db8ca\") " pod="kube-system/kube-proxy-8blqk"
May 17 00:49:48.540237 kubelet[2445]: I0517 00:49:48.539831 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae389757-4162-4c8f-bb1e-2ded929db8ca-xtables-lock\") pod \"kube-proxy-8blqk\" (UID: \"ae389757-4162-4c8f-bb1e-2ded929db8ca\") " pod="kube-system/kube-proxy-8blqk"
May 17 00:49:48.540237 kubelet[2445]: I0517 00:49:48.539844 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae389757-4162-4c8f-bb1e-2ded929db8ca-lib-modules\") pod \"kube-proxy-8blqk\" (UID: \"ae389757-4162-4c8f-bb1e-2ded929db8ca\") " pod="kube-system/kube-proxy-8blqk"
May 17 00:49:48.540237 kubelet[2445]: I0517 00:49:48.539859 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-hostproc\") pod \"cilium-ql9dt\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") " pod="kube-system/cilium-ql9dt"
May 17 00:49:48.540237 kubelet[2445]: I0517 00:49:48.539886 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-cilium-cgroup\") pod \"cilium-ql9dt\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") " pod="kube-system/cilium-ql9dt"
May 17 00:49:48.626503 systemd[1]: Created slice kubepods-besteffort-pod176de6be_8d2d_455b_ae9b_e09bae723cb5.slice.
May 17 00:49:48.641504 kubelet[2445]: I0517 00:49:48.641467 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/176de6be-8d2d-455b-ae9b-e09bae723cb5-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-p6pb2\" (UID: \"176de6be-8d2d-455b-ae9b-e09bae723cb5\") " pod="kube-system/cilium-operator-6c4d7847fc-p6pb2"
May 17 00:49:48.645313 kubelet[2445]: I0517 00:49:48.645288 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grxth\" (UniqueName: \"kubernetes.io/projected/176de6be-8d2d-455b-ae9b-e09bae723cb5-kube-api-access-grxth\") pod \"cilium-operator-6c4d7847fc-p6pb2\" (UID: \"176de6be-8d2d-455b-ae9b-e09bae723cb5\") " pod="kube-system/cilium-operator-6c4d7847fc-p6pb2"
May 17 00:49:48.646545 kubelet[2445]: I0517 00:49:48.645817 2445 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
May 17 00:49:48.816602 env[1452]: time="2025-05-17T00:49:48.816114843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8blqk,Uid:ae389757-4162-4c8f-bb1e-2ded929db8ca,Namespace:kube-system,Attempt:0,}"
May 17 00:49:48.829483 env[1452]: time="2025-05-17T00:49:48.829217300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ql9dt,Uid:a6d17eff-e44c-499f-9d1f-5bccae4b1278,Namespace:kube-system,Attempt:0,}"
May 17 00:49:48.892083 env[1452]: time="2025-05-17T00:49:48.887950074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:49:48.892083 env[1452]: time="2025-05-17T00:49:48.887998635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:49:48.892083 env[1452]: time="2025-05-17T00:49:48.888008475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:49:48.892083 env[1452]: time="2025-05-17T00:49:48.888139358Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a6ba37a9c0c34b51185d17ae8865e58d8448c9897d7732724319723a3c71118a pid=2531 runtime=io.containerd.runc.v2
May 17 00:49:48.892527 env[1452]: time="2025-05-17T00:49:48.892476656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:49:48.892583 env[1452]: time="2025-05-17T00:49:48.892542698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:49:48.892583 env[1452]: time="2025-05-17T00:49:48.892567779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:49:48.892885 env[1452]: time="2025-05-17T00:49:48.892839585Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1 pid=2548 runtime=io.containerd.runc.v2
May 17 00:49:48.905644 systemd[1]: Started cri-containerd-83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1.scope.
May 17 00:49:48.913161 systemd[1]: Started cri-containerd-a6ba37a9c0c34b51185d17ae8865e58d8448c9897d7732724319723a3c71118a.scope.
May 17 00:49:48.929227 env[1452]: time="2025-05-17T00:49:48.929093168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-p6pb2,Uid:176de6be-8d2d-455b-ae9b-e09bae723cb5,Namespace:kube-system,Attempt:0,}"
May 17 00:49:48.940247 env[1452]: time="2025-05-17T00:49:48.940164259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ql9dt,Uid:a6d17eff-e44c-499f-9d1f-5bccae4b1278,Namespace:kube-system,Attempt:0,} returns sandbox id \"83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1\""
May 17 00:49:48.944380 env[1452]: time="2025-05-17T00:49:48.944350314Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 17 00:49:48.951106 env[1452]: time="2025-05-17T00:49:48.951070667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8blqk,Uid:ae389757-4162-4c8f-bb1e-2ded929db8ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6ba37a9c0c34b51185d17ae8865e58d8448c9897d7732724319723a3c71118a\""
May 17 00:49:48.962420 env[1452]: time="2025-05-17T00:49:48.962350203Z" level=info msg="CreateContainer within sandbox \"a6ba37a9c0c34b51185d17ae8865e58d8448c9897d7732724319723a3c71118a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 17 00:49:48.990162 env[1452]: time="2025-05-17T00:49:48.986733517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:49:48.990162 env[1452]: time="2025-05-17T00:49:48.986781958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:49:48.990162 env[1452]: time="2025-05-17T00:49:48.986798238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:49:48.990162 env[1452]: time="2025-05-17T00:49:48.986981242Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe857e360a586dc0e00ecb3734493f58f5e3b04a939d0e96bad6180bdfaadf0a pid=2614 runtime=io.containerd.runc.v2
May 17 00:49:49.000886 systemd[1]: Started cri-containerd-fe857e360a586dc0e00ecb3734493f58f5e3b04a939d0e96bad6180bdfaadf0a.scope.
May 17 00:49:49.014046 env[1452]: time="2025-05-17T00:49:49.014007208Z" level=info msg="CreateContainer within sandbox \"a6ba37a9c0c34b51185d17ae8865e58d8448c9897d7732724319723a3c71118a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0c4bb5f90b425cdd70bd5d111e99e7c9fa2b702b548ca5bc1be30c3287d30d33\""
May 17 00:49:49.014858 env[1452]: time="2025-05-17T00:49:49.014834946Z" level=info msg="StartContainer for \"0c4bb5f90b425cdd70bd5d111e99e7c9fa2b702b548ca5bc1be30c3287d30d33\""
May 17 00:49:49.030688 systemd[1]: Started cri-containerd-0c4bb5f90b425cdd70bd5d111e99e7c9fa2b702b548ca5bc1be30c3287d30d33.scope.
May 17 00:49:49.051224 env[1452]: time="2025-05-17T00:49:49.050444094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-p6pb2,Uid:176de6be-8d2d-455b-ae9b-e09bae723cb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe857e360a586dc0e00ecb3734493f58f5e3b04a939d0e96bad6180bdfaadf0a\""
May 17 00:49:49.069972 env[1452]: time="2025-05-17T00:49:49.069856443Z" level=info msg="StartContainer for \"0c4bb5f90b425cdd70bd5d111e99e7c9fa2b702b548ca5bc1be30c3287d30d33\" returns successfully"
May 17 00:49:53.014326 kubelet[2445]: I0517 00:49:53.014271 2445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8blqk" podStartSLOduration=5.014257899 podStartE2EDuration="5.014257899s" podCreationTimestamp="2025-05-17 00:49:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:49:49.770283528 +0000 UTC m=+9.170036848" watchObservedRunningTime="2025-05-17 00:49:53.014257899 +0000 UTC m=+12.414011259"
May 17 00:49:53.145705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3847045592.mount: Deactivated successfully.
May 17 00:49:55.413144 env[1452]: time="2025-05-17T00:49:55.413078516Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:55.420061 env[1452]: time="2025-05-17T00:49:55.420007487Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:55.425450 env[1452]: time="2025-05-17T00:49:55.425393028Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:49:55.426245 env[1452]: time="2025-05-17T00:49:55.426213924Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
May 17 00:49:55.429319 env[1452]: time="2025-05-17T00:49:55.429284022Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 17 00:49:55.434260 env[1452]: time="2025-05-17T00:49:55.434231755Z" level=info msg="CreateContainer within sandbox \"83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 17 00:49:55.460056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount366912275.mount: Deactivated successfully.
May 17 00:49:55.464936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2578969830.mount: Deactivated successfully.
May 17 00:49:55.483296 env[1452]: time="2025-05-17T00:49:55.483247082Z" level=info msg="CreateContainer within sandbox \"83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ee022480d52e468911601964aff26f3fd3ff4d561be98c36bc72317df442262d\""
May 17 00:49:55.485131 env[1452]: time="2025-05-17T00:49:55.485087996Z" level=info msg="StartContainer for \"ee022480d52e468911601964aff26f3fd3ff4d561be98c36bc72317df442262d\""
May 17 00:49:55.500658 systemd[1]: Started cri-containerd-ee022480d52e468911601964aff26f3fd3ff4d561be98c36bc72317df442262d.scope.
May 17 00:49:55.532555 env[1452]: time="2025-05-17T00:49:55.532499452Z" level=info msg="StartContainer for \"ee022480d52e468911601964aff26f3fd3ff4d561be98c36bc72317df442262d\" returns successfully"
May 17 00:49:55.552655 systemd[1]: cri-containerd-ee022480d52e468911601964aff26f3fd3ff4d561be98c36bc72317df442262d.scope: Deactivated successfully.
May 17 00:49:56.458297 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee022480d52e468911601964aff26f3fd3ff4d561be98c36bc72317df442262d-rootfs.mount: Deactivated successfully.
May 17 00:49:58.288813 env[1452]: time="2025-05-17T00:49:58.288761963Z" level=info msg="shim disconnected" id=ee022480d52e468911601964aff26f3fd3ff4d561be98c36bc72317df442262d
May 17 00:49:58.289160 env[1452]: time="2025-05-17T00:49:58.289141490Z" level=warning msg="cleaning up after shim disconnected" id=ee022480d52e468911601964aff26f3fd3ff4d561be98c36bc72317df442262d namespace=k8s.io
May 17 00:49:58.289256 env[1452]: time="2025-05-17T00:49:58.289242812Z" level=info msg="cleaning up dead shim"
May 17 00:49:58.295951 env[1452]: time="2025-05-17T00:49:58.295917929Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:49:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2868 runtime=io.containerd.runc.v2\n"
May 17 00:49:58.772286 env[1452]: time="2025-05-17T00:49:58.772240069Z" level=info msg="CreateContainer within sandbox \"83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 17 00:49:58.797737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount281073921.mount: Deactivated successfully.
May 17 00:49:58.811956 env[1452]: time="2025-05-17T00:49:58.811914724Z" level=info msg="CreateContainer within sandbox \"83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"101cfcea6cafe1b89fb5df44c9ea9c05e496cf403e19dc7a8345695ea438e26a\""
May 17 00:49:58.812627 env[1452]: time="2025-05-17T00:49:58.812599376Z" level=info msg="StartContainer for \"101cfcea6cafe1b89fb5df44c9ea9c05e496cf403e19dc7a8345695ea438e26a\""
May 17 00:49:58.832393 systemd[1]: Started cri-containerd-101cfcea6cafe1b89fb5df44c9ea9c05e496cf403e19dc7a8345695ea438e26a.scope.
May 17 00:49:58.872141 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:49:58.872364 systemd[1]: Stopped systemd-sysctl.service.
May 17 00:49:58.872511 systemd[1]: Stopping systemd-sysctl.service...
May 17 00:49:58.874067 systemd[1]: Starting systemd-sysctl.service...
May 17 00:49:58.876964 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 17 00:49:58.877110 env[1452]: time="2025-05-17T00:49:58.877077625Z" level=info msg="StartContainer for \"101cfcea6cafe1b89fb5df44c9ea9c05e496cf403e19dc7a8345695ea438e26a\" returns successfully"
May 17 00:49:58.882612 systemd[1]: cri-containerd-101cfcea6cafe1b89fb5df44c9ea9c05e496cf403e19dc7a8345695ea438e26a.scope: Deactivated successfully.
May 17 00:49:58.890193 systemd[1]: Finished systemd-sysctl.service.
May 17 00:49:58.912959 env[1452]: time="2025-05-17T00:49:58.912917732Z" level=info msg="shim disconnected" id=101cfcea6cafe1b89fb5df44c9ea9c05e496cf403e19dc7a8345695ea438e26a
May 17 00:49:58.913243 env[1452]: time="2025-05-17T00:49:58.913223017Z" level=warning msg="cleaning up after shim disconnected" id=101cfcea6cafe1b89fb5df44c9ea9c05e496cf403e19dc7a8345695ea438e26a namespace=k8s.io
May 17 00:49:58.913331 env[1452]: time="2025-05-17T00:49:58.913318299Z" level=info msg="cleaning up dead shim"
May 17 00:49:58.920532 env[1452]: time="2025-05-17T00:49:58.920503825Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:49:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2930 runtime=io.containerd.runc.v2\n"
May 17 00:49:59.779652 env[1452]: time="2025-05-17T00:49:59.779598929Z" level=info msg="CreateContainer within sandbox \"83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 17 00:49:59.794564 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-101cfcea6cafe1b89fb5df44c9ea9c05e496cf403e19dc7a8345695ea438e26a-rootfs.mount: Deactivated successfully.
May 17 00:49:59.819560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3403668576.mount: Deactivated successfully.
May 17 00:49:59.840264 env[1452]: time="2025-05-17T00:49:59.840212844Z" level=info msg="CreateContainer within sandbox \"83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e895950198c88732d5875ac6f48138bf72985d2740bdf6d5b7c008ad88abd3ab\""
May 17 00:49:59.841683 env[1452]: time="2025-05-17T00:49:59.841648348Z" level=info msg="StartContainer for \"e895950198c88732d5875ac6f48138bf72985d2740bdf6d5b7c008ad88abd3ab\""
May 17 00:49:59.864526 systemd[1]: Started cri-containerd-e895950198c88732d5875ac6f48138bf72985d2740bdf6d5b7c008ad88abd3ab.scope.
May 17 00:49:59.897600 systemd[1]: cri-containerd-e895950198c88732d5875ac6f48138bf72985d2740bdf6d5b7c008ad88abd3ab.scope: Deactivated successfully.
May 17 00:49:59.900947 env[1452]: time="2025-05-17T00:49:59.900901400Z" level=info msg="StartContainer for \"e895950198c88732d5875ac6f48138bf72985d2740bdf6d5b7c008ad88abd3ab\" returns successfully"
May 17 00:49:59.945568 env[1452]: time="2025-05-17T00:49:59.945515642Z" level=info msg="shim disconnected" id=e895950198c88732d5875ac6f48138bf72985d2740bdf6d5b7c008ad88abd3ab
May 17 00:49:59.945790 env[1452]: time="2025-05-17T00:49:59.945772846Z" level=warning msg="cleaning up after shim disconnected" id=e895950198c88732d5875ac6f48138bf72985d2740bdf6d5b7c008ad88abd3ab namespace=k8s.io
May 17 00:49:59.945851 env[1452]: time="2025-05-17T00:49:59.945838647Z" level=info msg="cleaning up dead shim"
May 17 00:49:59.952404 env[1452]: time="2025-05-17T00:49:59.952377919Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:49:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2989 runtime=io.containerd.runc.v2\n"
May 17 00:50:00.780057 env[1452]: time="2025-05-17T00:50:00.775894494Z" level=info msg="CreateContainer within sandbox \"83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 17 00:50:00.794250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e895950198c88732d5875ac6f48138bf72985d2740bdf6d5b7c008ad88abd3ab-rootfs.mount: Deactivated successfully.
May 17 00:50:00.839861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2338268305.mount: Deactivated successfully.
May 17 00:50:00.874661 env[1452]: time="2025-05-17T00:50:00.874618938Z" level=info msg="CreateContainer within sandbox \"83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c1819190fb85e370c2d1ba760860a797232722de69a5d050e10a934bae3483a2\""
May 17 00:50:00.876663 env[1452]: time="2025-05-17T00:50:00.876630532Z" level=info msg="StartContainer for \"c1819190fb85e370c2d1ba760860a797232722de69a5d050e10a934bae3483a2\""
May 17 00:50:00.880577 env[1452]: time="2025-05-17T00:50:00.880552557Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:50:00.889466 env[1452]: time="2025-05-17T00:50:00.889420385Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:50:00.892023 systemd[1]: Started cri-containerd-c1819190fb85e370c2d1ba760860a797232722de69a5d050e10a934bae3483a2.scope.
May 17 00:50:00.897568 env[1452]: time="2025-05-17T00:50:00.897528960Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 17 00:50:00.898128 env[1452]: time="2025-05-17T00:50:00.898098129Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
May 17 00:50:00.907194 env[1452]: time="2025-05-17T00:50:00.907136240Z" level=info msg="CreateContainer within sandbox \"fe857e360a586dc0e00ecb3734493f58f5e3b04a939d0e96bad6180bdfaadf0a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 17 00:50:00.920369 systemd[1]: cri-containerd-c1819190fb85e370c2d1ba760860a797232722de69a5d050e10a934bae3483a2.scope: Deactivated successfully.
May 17 00:50:00.928632 env[1452]: time="2025-05-17T00:50:00.921460358Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6d17eff_e44c_499f_9d1f_5bccae4b1278.slice/cri-containerd-c1819190fb85e370c2d1ba760860a797232722de69a5d050e10a934bae3483a2.scope/memory.events\": no such file or directory"
May 17 00:50:00.935222 env[1452]: time="2025-05-17T00:50:00.935185307Z" level=info msg="StartContainer for \"c1819190fb85e370c2d1ba760860a797232722de69a5d050e10a934bae3483a2\" returns successfully"
May 17 00:50:01.286615 env[1452]: time="2025-05-17T00:50:01.286572882Z" level=info msg="shim disconnected" id=c1819190fb85e370c2d1ba760860a797232722de69a5d050e10a934bae3483a2
May 17 00:50:01.286830 env[1452]: time="2025-05-17T00:50:01.286812826Z" level=warning msg="cleaning up after shim disconnected" id=c1819190fb85e370c2d1ba760860a797232722de69a5d050e10a934bae3483a2 namespace=k8s.io
May 17 00:50:01.286896 env[1452]: time="2025-05-17T00:50:01.286881847Z" level=info msg="cleaning up dead shim"
May 17 00:50:01.293972 env[1452]: time="2025-05-17T00:50:01.293933762Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:50:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3047 runtime=io.containerd.runc.v2\n"
May 17 00:50:01.308830 env[1452]: time="2025-05-17T00:50:01.308789123Z" level=info msg="CreateContainer within sandbox \"fe857e360a586dc0e00ecb3734493f58f5e3b04a939d0e96bad6180bdfaadf0a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"99cd91e1d0306d7bb47018f3b544065689bdef2a3d4dba95b496dd4273f77829\""
May 17 00:50:01.310754 env[1452]: time="2025-05-17T00:50:01.309600737Z" level=info msg="StartContainer for \"99cd91e1d0306d7bb47018f3b544065689bdef2a3d4dba95b496dd4273f77829\""
May 17 00:50:01.326511 systemd[1]: Started cri-containerd-99cd91e1d0306d7bb47018f3b544065689bdef2a3d4dba95b496dd4273f77829.scope.
May 17 00:50:01.353948 env[1452]: time="2025-05-17T00:50:01.353891296Z" level=info msg="StartContainer for \"99cd91e1d0306d7bb47018f3b544065689bdef2a3d4dba95b496dd4273f77829\" returns successfully"
May 17 00:50:01.782339 env[1452]: time="2025-05-17T00:50:01.782287776Z" level=info msg="CreateContainer within sandbox \"83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 00:50:01.826596 env[1452]: time="2025-05-17T00:50:01.826539495Z" level=info msg="CreateContainer within sandbox \"83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4d002eaced0dd14bd094546fd7621467b9a899d24e54feb09e467695ebc58e28\""
May 17 00:50:01.827589 env[1452]: time="2025-05-17T00:50:01.827564792Z" level=info msg="StartContainer for \"4d002eaced0dd14bd094546fd7621467b9a899d24e54feb09e467695ebc58e28\""
May 17 00:50:01.859770 systemd[1]: Started cri-containerd-4d002eaced0dd14bd094546fd7621467b9a899d24e54feb09e467695ebc58e28.scope.
May 17 00:50:01.873315 kubelet[2445]: I0517 00:50:01.873249 2445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-p6pb2" podStartSLOduration=2.027665911 podStartE2EDuration="13.873230894s" podCreationTimestamp="2025-05-17 00:49:48 +0000 UTC" firstStartedPulling="2025-05-17 00:49:49.053813848 +0000 UTC m=+8.453567208" lastFinishedPulling="2025-05-17 00:50:00.899378871 +0000 UTC m=+20.299132191" observedRunningTime="2025-05-17 00:50:01.863579937 +0000 UTC m=+21.263333297" watchObservedRunningTime="2025-05-17 00:50:01.873230894 +0000 UTC m=+21.272984254"
May 17 00:50:01.916090 env[1452]: time="2025-05-17T00:50:01.916036309Z" level=info msg="StartContainer for \"4d002eaced0dd14bd094546fd7621467b9a899d24e54feb09e467695ebc58e28\" returns successfully"
May 17 00:50:02.184231 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
May 17 00:50:02.248990 kubelet[2445]: I0517 00:50:02.248778 2445 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
May 17 00:50:02.318721 systemd[1]: Created slice kubepods-burstable-pod3942e4b6_71db_4b17_86f2_87768b7912a1.slice.
May 17 00:50:02.326438 systemd[1]: Created slice kubepods-burstable-pod556e8389_777b_487b_b251_eb751ddf636e.slice.
May 17 00:50:02.431680 kubelet[2445]: I0517 00:50:02.431634 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7hl2\" (UniqueName: \"kubernetes.io/projected/556e8389-777b-487b-b251-eb751ddf636e-kube-api-access-m7hl2\") pod \"coredns-674b8bbfcf-zht5z\" (UID: \"556e8389-777b-487b-b251-eb751ddf636e\") " pod="kube-system/coredns-674b8bbfcf-zht5z"
May 17 00:50:02.431680 kubelet[2445]: I0517 00:50:02.431680 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/556e8389-777b-487b-b251-eb751ddf636e-config-volume\") pod \"coredns-674b8bbfcf-zht5z\" (UID: \"556e8389-777b-487b-b251-eb751ddf636e\") " pod="kube-system/coredns-674b8bbfcf-zht5z"
May 17 00:50:02.431861 kubelet[2445]: I0517 00:50:02.431701 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3942e4b6-71db-4b17-86f2-87768b7912a1-config-volume\") pod \"coredns-674b8bbfcf-vmrkd\" (UID: \"3942e4b6-71db-4b17-86f2-87768b7912a1\") " pod="kube-system/coredns-674b8bbfcf-vmrkd"
May 17 00:50:02.431861 kubelet[2445]: I0517 00:50:02.431717 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgwvf\" (UniqueName: \"kubernetes.io/projected/3942e4b6-71db-4b17-86f2-87768b7912a1-kube-api-access-vgwvf\") pod \"coredns-674b8bbfcf-vmrkd\" (UID: \"3942e4b6-71db-4b17-86f2-87768b7912a1\") " pod="kube-system/coredns-674b8bbfcf-vmrkd"
May 17 00:50:02.622857 env[1452]: time="2025-05-17T00:50:02.622807586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vmrkd,Uid:3942e4b6-71db-4b17-86f2-87768b7912a1,Namespace:kube-system,Attempt:0,}"
May 17 00:50:02.630084 env[1452]: time="2025-05-17T00:50:02.629894018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zht5z,Uid:556e8389-777b-487b-b251-eb751ddf636e,Namespace:kube-system,Attempt:0,}"
May 17 00:50:02.759194 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
May 17 00:50:02.808298 kubelet[2445]: I0517 00:50:02.808234 2445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ql9dt" podStartSLOduration=8.323225928 podStartE2EDuration="14.808217485s" podCreationTimestamp="2025-05-17 00:49:48 +0000 UTC" firstStartedPulling="2025-05-17 00:49:48.942029982 +0000 UTC m=+8.341783342" lastFinishedPulling="2025-05-17 00:49:55.427021579 +0000 UTC m=+14.826774899" observedRunningTime="2025-05-17 00:50:02.806743701 +0000 UTC m=+22.206497021" watchObservedRunningTime="2025-05-17 00:50:02.808217485 +0000 UTC m=+22.207970885"
May 17 00:50:05.229662 systemd-networkd[1623]: cilium_host: Link UP
May 17 00:50:05.229780 systemd-networkd[1623]: cilium_net: Link UP
May 17 00:50:05.229783 systemd-networkd[1623]: cilium_net: Gained carrier
May 17 00:50:05.229896 systemd-networkd[1623]: cilium_host: Gained carrier
May 17 00:50:05.230077 systemd-networkd[1623]: cilium_host: Gained IPv6LL
May 17 00:50:05.230222 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
May 17 00:50:05.394366 systemd-networkd[1623]: cilium_vxlan: Link UP
May 17 00:50:05.394373 systemd-networkd[1623]: cilium_vxlan: Gained carrier
May 17 00:50:05.639192 kernel: NET: Registered PF_ALG protocol family
May 17 00:50:05.859385 systemd-networkd[1623]: cilium_net: Gained IPv6LL
May 17 00:50:06.336268 systemd-networkd[1623]: lxc_health: Link UP
May 17 00:50:06.359202 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 17 00:50:06.359343 systemd-networkd[1623]: lxc_health: Gained carrier
May 17 00:50:06.716967 systemd-networkd[1623]: lxcda8e1663cade: Link UP
May 17 00:50:06.726768 kernel: eth0: renamed from tmp7d963
May 17 00:50:06.735608 systemd-networkd[1623]: lxcda8e1663cade: Gained carrier
May 17 00:50:06.736581 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcda8e1663cade: link becomes ready
May 17 00:50:06.744255 systemd-networkd[1623]: lxcb0f7302c7997: Link UP
May 17 00:50:06.759565 kernel: eth0: renamed from tmp2ebf0
May 17 00:50:06.768306 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb0f7302c7997: link becomes ready
May 17 00:50:06.767532 systemd-networkd[1623]: lxcb0f7302c7997: Gained carrier
May 17 00:50:07.203367 systemd-networkd[1623]: cilium_vxlan: Gained IPv6LL
May 17 00:50:07.843319 systemd-networkd[1623]: lxcb0f7302c7997: Gained IPv6LL
May 17 00:50:07.971364 systemd-networkd[1623]: lxc_health: Gained IPv6LL
May 17 00:50:08.291324 systemd-networkd[1623]: lxcda8e1663cade: Gained IPv6LL
May 17 00:50:10.227756 env[1452]: time="2025-05-17T00:50:10.227591682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:50:10.227756 env[1452]: time="2025-05-17T00:50:10.227634723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:50:10.227756 env[1452]: time="2025-05-17T00:50:10.227644563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:50:10.228279 env[1452]: time="2025-05-17T00:50:10.228207890Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ebf0a7dc3516855030803f885d50610e53caf90a961dc9f00100eef6f260a56 pid=3641 runtime=io.containerd.runc.v2
May 17 00:50:10.232771 env[1452]: time="2025-05-17T00:50:10.232530467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:50:10.232771 env[1452]: time="2025-05-17T00:50:10.232600428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:50:10.232771 env[1452]: time="2025-05-17T00:50:10.232611308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:50:10.232771 env[1452]: time="2025-05-17T00:50:10.232724910Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d96381e2730eda04d2a5dec0433aeca3df0abe3ddc86653b3d814859180d57c pid=3658 runtime=io.containerd.runc.v2 May 17 00:50:10.253433 systemd[1]: Started cri-containerd-7d96381e2730eda04d2a5dec0433aeca3df0abe3ddc86653b3d814859180d57c.scope. May 17 00:50:10.273491 systemd[1]: Started cri-containerd-2ebf0a7dc3516855030803f885d50610e53caf90a961dc9f00100eef6f260a56.scope. May 17 00:50:10.323830 env[1452]: time="2025-05-17T00:50:10.323777183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zht5z,Uid:556e8389-777b-487b-b251-eb751ddf636e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ebf0a7dc3516855030803f885d50610e53caf90a961dc9f00100eef6f260a56\"" May 17 00:50:10.323954 env[1452]: time="2025-05-17T00:50:10.323891064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vmrkd,Uid:3942e4b6-71db-4b17-86f2-87768b7912a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d96381e2730eda04d2a5dec0433aeca3df0abe3ddc86653b3d814859180d57c\"" May 17 00:50:10.339462 env[1452]: time="2025-05-17T00:50:10.339404068Z" level=info msg="CreateContainer within sandbox \"7d96381e2730eda04d2a5dec0433aeca3df0abe3ddc86653b3d814859180d57c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:50:10.343680 env[1452]: time="2025-05-17T00:50:10.343641523Z" level=info msg="CreateContainer within sandbox \"2ebf0a7dc3516855030803f885d50610e53caf90a961dc9f00100eef6f260a56\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:50:10.383571 env[1452]: 
time="2025-05-17T00:50:10.383535206Z" level=info msg="CreateContainer within sandbox \"7d96381e2730eda04d2a5dec0433aeca3df0abe3ddc86653b3d814859180d57c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d68e88cd52c788d152ec5cbbffa6cebecf74bec3c756cc6049328412d62faec7\"" May 17 00:50:10.384786 env[1452]: time="2025-05-17T00:50:10.384437458Z" level=info msg="StartContainer for \"d68e88cd52c788d152ec5cbbffa6cebecf74bec3c756cc6049328412d62faec7\"" May 17 00:50:10.395053 env[1452]: time="2025-05-17T00:50:10.395012276Z" level=info msg="CreateContainer within sandbox \"2ebf0a7dc3516855030803f885d50610e53caf90a961dc9f00100eef6f260a56\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4a2c08c6806bfaf2bfdf06784e78311709c80af85ad2cbf33e69f94882fac3bd\"" May 17 00:50:10.395880 env[1452]: time="2025-05-17T00:50:10.395833687Z" level=info msg="StartContainer for \"4a2c08c6806bfaf2bfdf06784e78311709c80af85ad2cbf33e69f94882fac3bd\"" May 17 00:50:10.401462 systemd[1]: Started cri-containerd-d68e88cd52c788d152ec5cbbffa6cebecf74bec3c756cc6049328412d62faec7.scope. May 17 00:50:10.422806 systemd[1]: Started cri-containerd-4a2c08c6806bfaf2bfdf06784e78311709c80af85ad2cbf33e69f94882fac3bd.scope. 
May 17 00:50:10.447904 env[1452]: time="2025-05-17T00:50:10.447849849Z" level=info msg="StartContainer for \"d68e88cd52c788d152ec5cbbffa6cebecf74bec3c756cc6049328412d62faec7\" returns successfully" May 17 00:50:10.463325 env[1452]: time="2025-05-17T00:50:10.463272571Z" level=info msg="StartContainer for \"4a2c08c6806bfaf2bfdf06784e78311709c80af85ad2cbf33e69f94882fac3bd\" returns successfully" May 17 00:50:10.814012 kubelet[2445]: I0517 00:50:10.813950 2445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zht5z" podStartSLOduration=22.813937286 podStartE2EDuration="22.813937286s" podCreationTimestamp="2025-05-17 00:49:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:50:10.813513161 +0000 UTC m=+30.213266481" watchObservedRunningTime="2025-05-17 00:50:10.813937286 +0000 UTC m=+30.213690646" May 17 00:50:10.832864 kubelet[2445]: I0517 00:50:10.832800 2445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vmrkd" podStartSLOduration=22.832789653 podStartE2EDuration="22.832789653s" podCreationTimestamp="2025-05-17 00:49:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:50:10.832480849 +0000 UTC m=+30.232234209" watchObservedRunningTime="2025-05-17 00:50:10.832789653 +0000 UTC m=+30.232542973" May 17 00:51:45.086728 systemd[1]: Started sshd@5-10.200.20.19:22-10.200.16.10:35560.service. May 17 00:51:45.573194 sshd[3814]: Accepted publickey for core from 10.200.16.10 port 35560 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:51:45.574907 sshd[3814]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:51:45.579257 systemd[1]: Started session-8.scope. 
May 17 00:51:45.579564 systemd-logind[1440]: New session 8 of user core. May 17 00:51:46.078833 sshd[3814]: pam_unix(sshd:session): session closed for user core May 17 00:51:46.081682 systemd-logind[1440]: Session 8 logged out. Waiting for processes to exit. May 17 00:51:46.082907 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:51:46.083687 systemd[1]: sshd@5-10.200.20.19:22-10.200.16.10:35560.service: Deactivated successfully. May 17 00:51:46.084745 systemd-logind[1440]: Removed session 8. May 17 00:51:51.154303 systemd[1]: Started sshd@6-10.200.20.19:22-10.200.16.10:58148.service. May 17 00:51:51.609143 sshd[3829]: Accepted publickey for core from 10.200.16.10 port 58148 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:51:51.610821 sshd[3829]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:51:51.615075 systemd[1]: Started session-9.scope. May 17 00:51:51.616241 systemd-logind[1440]: New session 9 of user core. May 17 00:51:52.011873 sshd[3829]: pam_unix(sshd:session): session closed for user core May 17 00:51:52.014625 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:51:52.014627 systemd-logind[1440]: Session 9 logged out. Waiting for processes to exit. May 17 00:51:52.015236 systemd[1]: sshd@6-10.200.20.19:22-10.200.16.10:58148.service: Deactivated successfully. May 17 00:51:52.016251 systemd-logind[1440]: Removed session 9. May 17 00:51:57.087638 systemd[1]: Started sshd@7-10.200.20.19:22-10.200.16.10:58160.service. May 17 00:51:57.540775 sshd[3841]: Accepted publickey for core from 10.200.16.10 port 58160 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:51:57.542559 sshd[3841]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:51:57.546876 systemd[1]: Started session-10.scope. May 17 00:51:57.547326 systemd-logind[1440]: New session 10 of user core. 
May 17 00:51:57.942119 sshd[3841]: pam_unix(sshd:session): session closed for user core May 17 00:51:57.944738 systemd-logind[1440]: Session 10 logged out. Waiting for processes to exit. May 17 00:51:57.944890 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:51:57.945700 systemd[1]: sshd@7-10.200.20.19:22-10.200.16.10:58160.service: Deactivated successfully. May 17 00:51:57.946962 systemd-logind[1440]: Removed session 10. May 17 00:52:03.022857 systemd[1]: Started sshd@8-10.200.20.19:22-10.200.16.10:49092.service. May 17 00:52:03.504608 sshd[3854]: Accepted publickey for core from 10.200.16.10 port 49092 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:52:03.505988 sshd[3854]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:52:03.510572 systemd[1]: Started session-11.scope. May 17 00:52:03.511041 systemd-logind[1440]: New session 11 of user core. May 17 00:52:03.926491 sshd[3854]: pam_unix(sshd:session): session closed for user core May 17 00:52:03.929591 systemd[1]: sshd@8-10.200.20.19:22-10.200.16.10:49092.service: Deactivated successfully. May 17 00:52:03.930382 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:52:03.931346 systemd-logind[1440]: Session 11 logged out. Waiting for processes to exit. May 17 00:52:03.932128 systemd-logind[1440]: Removed session 11. May 17 00:52:04.008336 systemd[1]: Started sshd@9-10.200.20.19:22-10.200.16.10:49102.service. May 17 00:52:04.498073 sshd[3866]: Accepted publickey for core from 10.200.16.10 port 49102 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:52:04.499762 sshd[3866]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:52:04.504214 systemd[1]: Started session-12.scope. May 17 00:52:04.505093 systemd-logind[1440]: New session 12 of user core. 
May 17 00:52:04.946874 sshd[3866]: pam_unix(sshd:session): session closed for user core May 17 00:52:04.949826 systemd-logind[1440]: Session 12 logged out. Waiting for processes to exit. May 17 00:52:04.950005 systemd[1]: sshd@9-10.200.20.19:22-10.200.16.10:49102.service: Deactivated successfully. May 17 00:52:04.950770 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:52:04.951543 systemd-logind[1440]: Removed session 12. May 17 00:52:05.027916 systemd[1]: Started sshd@10-10.200.20.19:22-10.200.16.10:49104.service. May 17 00:52:05.479506 sshd[3876]: Accepted publickey for core from 10.200.16.10 port 49104 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:52:05.481225 sshd[3876]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:52:05.485705 systemd[1]: Started session-13.scope. May 17 00:52:05.486134 systemd-logind[1440]: New session 13 of user core. May 17 00:52:05.890728 sshd[3876]: pam_unix(sshd:session): session closed for user core May 17 00:52:05.893337 systemd[1]: sshd@10-10.200.20.19:22-10.200.16.10:49104.service: Deactivated successfully. May 17 00:52:05.894034 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:52:05.894571 systemd-logind[1440]: Session 13 logged out. Waiting for processes to exit. May 17 00:52:05.895261 systemd-logind[1440]: Removed session 13. May 17 00:52:10.970480 systemd[1]: Started sshd@11-10.200.20.19:22-10.200.16.10:56998.service. May 17 00:52:11.452430 sshd[3888]: Accepted publickey for core from 10.200.16.10 port 56998 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:52:11.454309 sshd[3888]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:52:11.460144 systemd[1]: Started session-14.scope. May 17 00:52:11.461056 systemd-logind[1440]: New session 14 of user core. 
May 17 00:52:11.869660 sshd[3888]: pam_unix(sshd:session): session closed for user core May 17 00:52:11.872322 systemd-logind[1440]: Session 14 logged out. Waiting for processes to exit. May 17 00:52:11.872506 systemd[1]: sshd@11-10.200.20.19:22-10.200.16.10:56998.service: Deactivated successfully. May 17 00:52:11.873276 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:52:11.873939 systemd-logind[1440]: Removed session 14. May 17 00:52:16.950734 systemd[1]: Started sshd@12-10.200.20.19:22-10.200.16.10:57004.service. May 17 00:52:17.440092 sshd[3900]: Accepted publickey for core from 10.200.16.10 port 57004 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:52:17.441427 sshd[3900]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:52:17.445641 systemd-logind[1440]: New session 15 of user core. May 17 00:52:17.446403 systemd[1]: Started session-15.scope. May 17 00:52:17.850391 sshd[3900]: pam_unix(sshd:session): session closed for user core May 17 00:52:17.853089 systemd-logind[1440]: Session 15 logged out. Waiting for processes to exit. May 17 00:52:17.853288 systemd[1]: sshd@12-10.200.20.19:22-10.200.16.10:57004.service: Deactivated successfully. May 17 00:52:17.853974 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:52:17.854852 systemd-logind[1440]: Removed session 15. May 17 00:52:17.929550 systemd[1]: Started sshd@13-10.200.20.19:22-10.200.16.10:57006.service. May 17 00:52:18.410484 sshd[3911]: Accepted publickey for core from 10.200.16.10 port 57006 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:52:18.412069 sshd[3911]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:52:18.416318 systemd[1]: Started session-16.scope. May 17 00:52:18.416999 systemd-logind[1440]: New session 16 of user core. 
May 17 00:52:18.853250 sshd[3911]: pam_unix(sshd:session): session closed for user core May 17 00:52:18.855760 systemd[1]: sshd@13-10.200.20.19:22-10.200.16.10:57006.service: Deactivated successfully. May 17 00:52:18.856504 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:52:18.857259 systemd-logind[1440]: Session 16 logged out. Waiting for processes to exit. May 17 00:52:18.858066 systemd-logind[1440]: Removed session 16. May 17 00:52:18.934188 systemd[1]: Started sshd@14-10.200.20.19:22-10.200.16.10:47390.service. May 17 00:52:19.426556 sshd[3920]: Accepted publickey for core from 10.200.16.10 port 47390 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:52:19.427881 sshd[3920]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:52:19.431721 systemd-logind[1440]: New session 17 of user core. May 17 00:52:19.432158 systemd[1]: Started session-17.scope. May 17 00:52:20.633310 sshd[3920]: pam_unix(sshd:session): session closed for user core May 17 00:52:20.636145 systemd[1]: sshd@14-10.200.20.19:22-10.200.16.10:47390.service: Deactivated successfully. May 17 00:52:20.636942 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:52:20.637517 systemd-logind[1440]: Session 17 logged out. Waiting for processes to exit. May 17 00:52:20.638511 systemd-logind[1440]: Removed session 17. May 17 00:52:20.713253 systemd[1]: Started sshd@15-10.200.20.19:22-10.200.16.10:47396.service. May 17 00:52:21.197650 sshd[3939]: Accepted publickey for core from 10.200.16.10 port 47396 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:52:21.198932 sshd[3939]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:52:21.203308 systemd[1]: Started session-18.scope. May 17 00:52:21.203614 systemd-logind[1440]: New session 18 of user core. 
May 17 00:52:21.734010 sshd[3939]: pam_unix(sshd:session): session closed for user core May 17 00:52:21.737134 systemd-logind[1440]: Session 18 logged out. Waiting for processes to exit. May 17 00:52:21.737758 systemd[1]: sshd@15-10.200.20.19:22-10.200.16.10:47396.service: Deactivated successfully. May 17 00:52:21.738507 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:52:21.739240 systemd-logind[1440]: Removed session 18. May 17 00:52:21.813455 systemd[1]: Started sshd@16-10.200.20.19:22-10.200.16.10:47406.service. May 17 00:52:22.296516 sshd[3949]: Accepted publickey for core from 10.200.16.10 port 47406 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:52:22.298145 sshd[3949]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:52:22.302352 systemd[1]: Started session-19.scope. May 17 00:52:22.302753 systemd-logind[1440]: New session 19 of user core. May 17 00:52:22.714452 sshd[3949]: pam_unix(sshd:session): session closed for user core May 17 00:52:22.717344 systemd[1]: sshd@16-10.200.20.19:22-10.200.16.10:47406.service: Deactivated successfully. May 17 00:52:22.718074 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:52:22.718925 systemd-logind[1440]: Session 19 logged out. Waiting for processes to exit. May 17 00:52:22.719716 systemd-logind[1440]: Removed session 19. May 17 00:52:27.789514 systemd[1]: Started sshd@17-10.200.20.19:22-10.200.16.10:47422.service. May 17 00:52:28.245202 sshd[3964]: Accepted publickey for core from 10.200.16.10 port 47422 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:52:28.246515 sshd[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:52:28.250892 systemd[1]: Started session-20.scope. May 17 00:52:28.251035 systemd-logind[1440]: New session 20 of user core. 
May 17 00:52:28.644094 sshd[3964]: pam_unix(sshd:session): session closed for user core May 17 00:52:28.647233 systemd-logind[1440]: Session 20 logged out. Waiting for processes to exit. May 17 00:52:28.647239 systemd[1]: sshd@17-10.200.20.19:22-10.200.16.10:47422.service: Deactivated successfully. May 17 00:52:28.647902 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:52:28.648615 systemd-logind[1440]: Removed session 20. May 17 00:52:33.719324 systemd[1]: Started sshd@18-10.200.20.19:22-10.200.16.10:42966.service. May 17 00:52:34.169389 sshd[3976]: Accepted publickey for core from 10.200.16.10 port 42966 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:52:34.170717 sshd[3976]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:52:34.174999 systemd-logind[1440]: New session 21 of user core. May 17 00:52:34.175418 systemd[1]: Started session-21.scope. May 17 00:52:34.567603 sshd[3976]: pam_unix(sshd:session): session closed for user core May 17 00:52:34.570692 systemd[1]: sshd@18-10.200.20.19:22-10.200.16.10:42966.service: Deactivated successfully. May 17 00:52:34.571447 systemd[1]: session-21.scope: Deactivated successfully. May 17 00:52:34.571969 systemd-logind[1440]: Session 21 logged out. Waiting for processes to exit. May 17 00:52:34.572677 systemd-logind[1440]: Removed session 21. May 17 00:52:34.642628 systemd[1]: Started sshd@19-10.200.20.19:22-10.200.16.10:42970.service. May 17 00:52:35.095155 sshd[3988]: Accepted publickey for core from 10.200.16.10 port 42970 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:52:35.096792 sshd[3988]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:52:35.101007 systemd[1]: Started session-22.scope. May 17 00:52:35.101354 systemd-logind[1440]: New session 22 of user core. 
May 17 00:52:38.120125 env[1452]: time="2025-05-17T00:52:38.120073124Z" level=info msg="StopContainer for \"99cd91e1d0306d7bb47018f3b544065689bdef2a3d4dba95b496dd4273f77829\" with timeout 30 (s)" May 17 00:52:38.120849 env[1452]: time="2025-05-17T00:52:38.120708507Z" level=info msg="Stop container \"99cd91e1d0306d7bb47018f3b544065689bdef2a3d4dba95b496dd4273f77829\" with signal terminated" May 17 00:52:38.132337 systemd[1]: cri-containerd-99cd91e1d0306d7bb47018f3b544065689bdef2a3d4dba95b496dd4273f77829.scope: Deactivated successfully. May 17 00:52:38.133557 env[1452]: time="2025-05-17T00:52:38.133500019Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:52:38.141917 env[1452]: time="2025-05-17T00:52:38.141875564Z" level=info msg="StopContainer for \"4d002eaced0dd14bd094546fd7621467b9a899d24e54feb09e467695ebc58e28\" with timeout 2 (s)" May 17 00:52:38.142201 env[1452]: time="2025-05-17T00:52:38.142151917Z" level=info msg="Stop container \"4d002eaced0dd14bd094546fd7621467b9a899d24e54feb09e467695ebc58e28\" with signal terminated" May 17 00:52:38.150735 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99cd91e1d0306d7bb47018f3b544065689bdef2a3d4dba95b496dd4273f77829-rootfs.mount: Deactivated successfully. May 17 00:52:38.154998 systemd-networkd[1623]: lxc_health: Link DOWN May 17 00:52:38.155006 systemd-networkd[1623]: lxc_health: Lost carrier May 17 00:52:38.180644 systemd[1]: cri-containerd-4d002eaced0dd14bd094546fd7621467b9a899d24e54feb09e467695ebc58e28.scope: Deactivated successfully. May 17 00:52:38.180946 systemd[1]: cri-containerd-4d002eaced0dd14bd094546fd7621467b9a899d24e54feb09e467695ebc58e28.scope: Consumed 5.996s CPU time. 
May 17 00:52:38.189875 env[1452]: time="2025-05-17T00:52:38.189833092Z" level=info msg="shim disconnected" id=99cd91e1d0306d7bb47018f3b544065689bdef2a3d4dba95b496dd4273f77829 May 17 00:52:38.190063 env[1452]: time="2025-05-17T00:52:38.190043327Z" level=warning msg="cleaning up after shim disconnected" id=99cd91e1d0306d7bb47018f3b544065689bdef2a3d4dba95b496dd4273f77829 namespace=k8s.io May 17 00:52:38.190155 env[1452]: time="2025-05-17T00:52:38.190139684Z" level=info msg="cleaning up dead shim" May 17 00:52:38.199871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d002eaced0dd14bd094546fd7621467b9a899d24e54feb09e467695ebc58e28-rootfs.mount: Deactivated successfully. May 17 00:52:38.203638 env[1452]: time="2025-05-17T00:52:38.203604978Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:52:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4050 runtime=io.containerd.runc.v2\n" May 17 00:52:38.232893 env[1452]: time="2025-05-17T00:52:38.232795949Z" level=info msg="StopContainer for \"99cd91e1d0306d7bb47018f3b544065689bdef2a3d4dba95b496dd4273f77829\" returns successfully" May 17 00:52:38.236574 env[1452]: time="2025-05-17T00:52:38.234624982Z" level=info msg="StopPodSandbox for \"fe857e360a586dc0e00ecb3734493f58f5e3b04a939d0e96bad6180bdfaadf0a\"" May 17 00:52:38.236574 env[1452]: time="2025-05-17T00:52:38.234718859Z" level=info msg="Container to stop \"99cd91e1d0306d7bb47018f3b544065689bdef2a3d4dba95b496dd4273f77829\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:52:38.236930 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fe857e360a586dc0e00ecb3734493f58f5e3b04a939d0e96bad6180bdfaadf0a-shm.mount: Deactivated successfully. May 17 00:52:38.245010 systemd[1]: cri-containerd-fe857e360a586dc0e00ecb3734493f58f5e3b04a939d0e96bad6180bdfaadf0a.scope: Deactivated successfully. 
May 17 00:52:38.263754 env[1452]: time="2025-05-17T00:52:38.263707915Z" level=info msg="shim disconnected" id=fe857e360a586dc0e00ecb3734493f58f5e3b04a939d0e96bad6180bdfaadf0a May 17 00:52:38.264133 env[1452]: time="2025-05-17T00:52:38.264112304Z" level=warning msg="cleaning up after shim disconnected" id=fe857e360a586dc0e00ecb3734493f58f5e3b04a939d0e96bad6180bdfaadf0a namespace=k8s.io May 17 00:52:38.264234 env[1452]: time="2025-05-17T00:52:38.264204862Z" level=info msg="cleaning up dead shim" May 17 00:52:38.264900 env[1452]: time="2025-05-17T00:52:38.263761873Z" level=info msg="shim disconnected" id=4d002eaced0dd14bd094546fd7621467b9a899d24e54feb09e467695ebc58e28 May 17 00:52:38.264991 env[1452]: time="2025-05-17T00:52:38.264974962Z" level=warning msg="cleaning up after shim disconnected" id=4d002eaced0dd14bd094546fd7621467b9a899d24e54feb09e467695ebc58e28 namespace=k8s.io May 17 00:52:38.265056 env[1452]: time="2025-05-17T00:52:38.265043280Z" level=info msg="cleaning up dead shim" May 17 00:52:38.270943 env[1452]: time="2025-05-17T00:52:38.270901170Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:52:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4087 runtime=io.containerd.runc.v2\n" May 17 00:52:38.271225 env[1452]: time="2025-05-17T00:52:38.271164883Z" level=info msg="TearDown network for sandbox \"fe857e360a586dc0e00ecb3734493f58f5e3b04a939d0e96bad6180bdfaadf0a\" successfully" May 17 00:52:38.271225 env[1452]: time="2025-05-17T00:52:38.271221562Z" level=info msg="StopPodSandbox for \"fe857e360a586dc0e00ecb3734493f58f5e3b04a939d0e96bad6180bdfaadf0a\" returns successfully" May 17 00:52:38.277772 env[1452]: time="2025-05-17T00:52:38.277743514Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:52:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4088 runtime=io.containerd.runc.v2\n" May 17 00:52:38.284117 env[1452]: time="2025-05-17T00:52:38.284075832Z" level=info msg="StopContainer for 
\"4d002eaced0dd14bd094546fd7621467b9a899d24e54feb09e467695ebc58e28\" returns successfully" May 17 00:52:38.284447 env[1452]: time="2025-05-17T00:52:38.284417903Z" level=info msg="StopPodSandbox for \"83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1\"" May 17 00:52:38.284486 env[1452]: time="2025-05-17T00:52:38.284471221Z" level=info msg="Container to stop \"101cfcea6cafe1b89fb5df44c9ea9c05e496cf403e19dc7a8345695ea438e26a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:52:38.284513 env[1452]: time="2025-05-17T00:52:38.284484621Z" level=info msg="Container to stop \"e895950198c88732d5875ac6f48138bf72985d2740bdf6d5b7c008ad88abd3ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:52:38.284513 env[1452]: time="2025-05-17T00:52:38.284495861Z" level=info msg="Container to stop \"ee022480d52e468911601964aff26f3fd3ff4d561be98c36bc72317df442262d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:52:38.284513 env[1452]: time="2025-05-17T00:52:38.284507620Z" level=info msg="Container to stop \"c1819190fb85e370c2d1ba760860a797232722de69a5d050e10a934bae3483a2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:52:38.284605 env[1452]: time="2025-05-17T00:52:38.284518180Z" level=info msg="Container to stop \"4d002eaced0dd14bd094546fd7621467b9a899d24e54feb09e467695ebc58e28\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:52:38.288951 systemd[1]: cri-containerd-83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1.scope: Deactivated successfully. 
May 17 00:52:38.324199 env[1452]: time="2025-05-17T00:52:38.324135523Z" level=info msg="shim disconnected" id=83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1
May 17 00:52:38.324362 env[1452]: time="2025-05-17T00:52:38.324233560Z" level=warning msg="cleaning up after shim disconnected" id=83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1 namespace=k8s.io
May 17 00:52:38.324362 env[1452]: time="2025-05-17T00:52:38.324244640Z" level=info msg="cleaning up dead shim"
May 17 00:52:38.331614 env[1452]: time="2025-05-17T00:52:38.331569412Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:52:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4130 runtime=io.containerd.runc.v2\n"
May 17 00:52:38.331892 env[1452]: time="2025-05-17T00:52:38.331866084Z" level=info msg="TearDown network for sandbox \"83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1\" successfully"
May 17 00:52:38.331941 env[1452]: time="2025-05-17T00:52:38.331892204Z" level=info msg="StopPodSandbox for \"83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1\" returns successfully"
May 17 00:52:38.337583 kubelet[2445]: I0517 00:52:38.337254 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/176de6be-8d2d-455b-ae9b-e09bae723cb5-cilium-config-path\") pod \"176de6be-8d2d-455b-ae9b-e09bae723cb5\" (UID: \"176de6be-8d2d-455b-ae9b-e09bae723cb5\") "
May 17 00:52:38.337583 kubelet[2445]: I0517 00:52:38.337286 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grxth\" (UniqueName: \"kubernetes.io/projected/176de6be-8d2d-455b-ae9b-e09bae723cb5-kube-api-access-grxth\") pod \"176de6be-8d2d-455b-ae9b-e09bae723cb5\" (UID: \"176de6be-8d2d-455b-ae9b-e09bae723cb5\") "
May 17 00:52:38.341053 kubelet[2445]: I0517 00:52:38.341006 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/176de6be-8d2d-455b-ae9b-e09bae723cb5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "176de6be-8d2d-455b-ae9b-e09bae723cb5" (UID: "176de6be-8d2d-455b-ae9b-e09bae723cb5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 17 00:52:38.349310 kubelet[2445]: I0517 00:52:38.349278 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/176de6be-8d2d-455b-ae9b-e09bae723cb5-kube-api-access-grxth" (OuterVolumeSpecName: "kube-api-access-grxth") pod "176de6be-8d2d-455b-ae9b-e09bae723cb5" (UID: "176de6be-8d2d-455b-ae9b-e09bae723cb5"). InnerVolumeSpecName "kube-api-access-grxth". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 17 00:52:38.437718 kubelet[2445]: I0517 00:52:38.437610 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-etc-cni-netd\") pod \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") "
May 17 00:52:38.437718 kubelet[2445]: I0517 00:52:38.437659 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-cni-path\") pod \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") "
May 17 00:52:38.437718 kubelet[2445]: I0517 00:52:38.437675 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-cilium-cgroup\") pod \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") "
May 17 00:52:38.437718 kubelet[2445]: I0517 00:52:38.437696 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a6d17eff-e44c-499f-9d1f-5bccae4b1278-cilium-config-path\") pod \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") "
May 17 00:52:38.437909 kubelet[2445]: I0517 00:52:38.437728 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-hostproc\") pod \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") "
May 17 00:52:38.437909 kubelet[2445]: I0517 00:52:38.437747 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-host-proc-sys-kernel\") pod \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") "
May 17 00:52:38.437909 kubelet[2445]: I0517 00:52:38.437765 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a6d17eff-e44c-499f-9d1f-5bccae4b1278-hubble-tls\") pod \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") "
May 17 00:52:38.437909 kubelet[2445]: I0517 00:52:38.437785 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-lib-modules\") pod \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") "
May 17 00:52:38.437909 kubelet[2445]: I0517 00:52:38.437833 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-bpf-maps\") pod \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") "
May 17 00:52:38.437909 kubelet[2445]: I0517 00:52:38.437852 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-cilium-run\") pod \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") "
May 17 00:52:38.438050 kubelet[2445]: I0517 00:52:38.437868 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl8rl\" (UniqueName: \"kubernetes.io/projected/a6d17eff-e44c-499f-9d1f-5bccae4b1278-kube-api-access-fl8rl\") pod \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") "
May 17 00:52:38.438050 kubelet[2445]: I0517 00:52:38.437891 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-host-proc-sys-net\") pod \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") "
May 17 00:52:38.438050 kubelet[2445]: I0517 00:52:38.437909 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-xtables-lock\") pod \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") "
May 17 00:52:38.438050 kubelet[2445]: I0517 00:52:38.437926 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a6d17eff-e44c-499f-9d1f-5bccae4b1278-clustermesh-secrets\") pod \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\" (UID: \"a6d17eff-e44c-499f-9d1f-5bccae4b1278\") "
May 17 00:52:38.438050 kubelet[2445]: I0517 00:52:38.437970 2445 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/176de6be-8d2d-455b-ae9b-e09bae723cb5-cilium-config-path\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\""
May 17 00:52:38.438050 kubelet[2445]: I0517 00:52:38.437982 2445 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grxth\" (UniqueName: \"kubernetes.io/projected/176de6be-8d2d-455b-ae9b-e09bae723cb5-kube-api-access-grxth\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\""
May 17 00:52:38.438642 kubelet[2445]: I0517 00:52:38.438608 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a6d17eff-e44c-499f-9d1f-5bccae4b1278" (UID: "a6d17eff-e44c-499f-9d1f-5bccae4b1278"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:52:38.438709 kubelet[2445]: I0517 00:52:38.438666 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a6d17eff-e44c-499f-9d1f-5bccae4b1278" (UID: "a6d17eff-e44c-499f-9d1f-5bccae4b1278"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:52:38.438709 kubelet[2445]: I0517 00:52:38.438684 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a6d17eff-e44c-499f-9d1f-5bccae4b1278" (UID: "a6d17eff-e44c-499f-9d1f-5bccae4b1278"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:52:38.438920 kubelet[2445]: I0517 00:52:38.438891 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a6d17eff-e44c-499f-9d1f-5bccae4b1278" (UID: "a6d17eff-e44c-499f-9d1f-5bccae4b1278"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:52:38.438967 kubelet[2445]: I0517 00:52:38.438920 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-cni-path" (OuterVolumeSpecName: "cni-path") pod "a6d17eff-e44c-499f-9d1f-5bccae4b1278" (UID: "a6d17eff-e44c-499f-9d1f-5bccae4b1278"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:52:38.438967 kubelet[2445]: I0517 00:52:38.438934 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a6d17eff-e44c-499f-9d1f-5bccae4b1278" (UID: "a6d17eff-e44c-499f-9d1f-5bccae4b1278"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:52:38.440933 kubelet[2445]: I0517 00:52:38.440718 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6d17eff-e44c-499f-9d1f-5bccae4b1278-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a6d17eff-e44c-499f-9d1f-5bccae4b1278" (UID: "a6d17eff-e44c-499f-9d1f-5bccae4b1278"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 17 00:52:38.440933 kubelet[2445]: I0517 00:52:38.440762 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-hostproc" (OuterVolumeSpecName: "hostproc") pod "a6d17eff-e44c-499f-9d1f-5bccae4b1278" (UID: "a6d17eff-e44c-499f-9d1f-5bccae4b1278"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:52:38.440933 kubelet[2445]: I0517 00:52:38.440777 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a6d17eff-e44c-499f-9d1f-5bccae4b1278" (UID: "a6d17eff-e44c-499f-9d1f-5bccae4b1278"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:52:38.440933 kubelet[2445]: I0517 00:52:38.440915 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a6d17eff-e44c-499f-9d1f-5bccae4b1278" (UID: "a6d17eff-e44c-499f-9d1f-5bccae4b1278"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:52:38.440933 kubelet[2445]: I0517 00:52:38.440936 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a6d17eff-e44c-499f-9d1f-5bccae4b1278" (UID: "a6d17eff-e44c-499f-9d1f-5bccae4b1278"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:52:38.443249 kubelet[2445]: I0517 00:52:38.443224 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6d17eff-e44c-499f-9d1f-5bccae4b1278-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a6d17eff-e44c-499f-9d1f-5bccae4b1278" (UID: "a6d17eff-e44c-499f-9d1f-5bccae4b1278"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 17 00:52:38.443575 kubelet[2445]: I0517 00:52:38.443540 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6d17eff-e44c-499f-9d1f-5bccae4b1278-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a6d17eff-e44c-499f-9d1f-5bccae4b1278" (UID: "a6d17eff-e44c-499f-9d1f-5bccae4b1278"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 17 00:52:38.445389 kubelet[2445]: I0517 00:52:38.445363 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6d17eff-e44c-499f-9d1f-5bccae4b1278-kube-api-access-fl8rl" (OuterVolumeSpecName: "kube-api-access-fl8rl") pod "a6d17eff-e44c-499f-9d1f-5bccae4b1278" (UID: "a6d17eff-e44c-499f-9d1f-5bccae4b1278"). InnerVolumeSpecName "kube-api-access-fl8rl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 17 00:52:38.538438 kubelet[2445]: I0517 00:52:38.538405 2445 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-xtables-lock\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\""
May 17 00:52:38.538600 kubelet[2445]: I0517 00:52:38.538588 2445 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a6d17eff-e44c-499f-9d1f-5bccae4b1278-clustermesh-secrets\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\""
May 17 00:52:38.538681 kubelet[2445]: I0517 00:52:38.538670 2445 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-etc-cni-netd\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\""
May 17 00:52:38.538756 kubelet[2445]: I0517 00:52:38.538746 2445 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-cni-path\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\""
May 17 00:52:38.538818 kubelet[2445]: I0517 00:52:38.538809 2445 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-cilium-cgroup\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\""
May 17 00:52:38.538881 kubelet[2445]: I0517 00:52:38.538862 2445 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a6d17eff-e44c-499f-9d1f-5bccae4b1278-cilium-config-path\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\""
May 17 00:52:38.538949 kubelet[2445]: I0517 00:52:38.538937 2445 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-hostproc\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\""
May 17 00:52:38.539024 kubelet[2445]: I0517 00:52:38.539010 2445 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-host-proc-sys-kernel\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\""
May 17 00:52:38.539094 kubelet[2445]: I0517 00:52:38.539083 2445 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a6d17eff-e44c-499f-9d1f-5bccae4b1278-hubble-tls\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\""
May 17 00:52:38.539160 kubelet[2445]: I0517 00:52:38.539141 2445 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-lib-modules\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\""
May 17 00:52:38.539259 kubelet[2445]: I0517 00:52:38.539248 2445 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-bpf-maps\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\""
May 17 00:52:38.539329 kubelet[2445]: I0517 00:52:38.539319 2445 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-cilium-run\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\""
May 17 00:52:38.539397 kubelet[2445]: I0517 00:52:38.539384 2445 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fl8rl\" (UniqueName: \"kubernetes.io/projected/a6d17eff-e44c-499f-9d1f-5bccae4b1278-kube-api-access-fl8rl\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\""
May 17 00:52:38.539470 kubelet[2445]: I0517 00:52:38.539457 2445 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a6d17eff-e44c-499f-9d1f-5bccae4b1278-host-proc-sys-net\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\""
May 17 00:52:38.717692 systemd[1]: Removed slice kubepods-besteffort-pod176de6be_8d2d_455b_ae9b_e09bae723cb5.slice.
May 17 00:52:38.719889 systemd[1]: Removed slice kubepods-burstable-poda6d17eff_e44c_499f_9d1f_5bccae4b1278.slice.
May 17 00:52:38.719965 systemd[1]: kubepods-burstable-poda6d17eff_e44c_499f_9d1f_5bccae4b1278.slice: Consumed 6.082s CPU time.
May 17 00:52:39.054942 kubelet[2445]: I0517 00:52:39.054853 2445 scope.go:117] "RemoveContainer" containerID="99cd91e1d0306d7bb47018f3b544065689bdef2a3d4dba95b496dd4273f77829"
May 17 00:52:39.059426 env[1452]: time="2025-05-17T00:52:39.059281063Z" level=info msg="RemoveContainer for \"99cd91e1d0306d7bb47018f3b544065689bdef2a3d4dba95b496dd4273f77829\""
May 17 00:52:39.073333 env[1452]: time="2025-05-17T00:52:39.073142232Z" level=info msg="RemoveContainer for \"99cd91e1d0306d7bb47018f3b544065689bdef2a3d4dba95b496dd4273f77829\" returns successfully"
May 17 00:52:39.074075 kubelet[2445]: I0517 00:52:39.074049 2445 scope.go:117] "RemoveContainer" containerID="99cd91e1d0306d7bb47018f3b544065689bdef2a3d4dba95b496dd4273f77829"
May 17 00:52:39.074464 env[1452]: time="2025-05-17T00:52:39.074355441Z" level=error msg="ContainerStatus for \"99cd91e1d0306d7bb47018f3b544065689bdef2a3d4dba95b496dd4273f77829\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"99cd91e1d0306d7bb47018f3b544065689bdef2a3d4dba95b496dd4273f77829\": not found"
May 17 00:52:39.074699 kubelet[2445]: E0517 00:52:39.074662 2445 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"99cd91e1d0306d7bb47018f3b544065689bdef2a3d4dba95b496dd4273f77829\": not found" containerID="99cd91e1d0306d7bb47018f3b544065689bdef2a3d4dba95b496dd4273f77829"
May 17 00:52:39.074817 kubelet[2445]: I0517 00:52:39.074780 2445 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"99cd91e1d0306d7bb47018f3b544065689bdef2a3d4dba95b496dd4273f77829"} err="failed to get container status \"99cd91e1d0306d7bb47018f3b544065689bdef2a3d4dba95b496dd4273f77829\": rpc error: code = NotFound desc = an error occurred when try to find container \"99cd91e1d0306d7bb47018f3b544065689bdef2a3d4dba95b496dd4273f77829\": not found"
May 17 00:52:39.074900 kubelet[2445]: I0517 00:52:39.074890 2445 scope.go:117] "RemoveContainer" containerID="4d002eaced0dd14bd094546fd7621467b9a899d24e54feb09e467695ebc58e28"
May 17 00:52:39.076395 env[1452]: time="2025-05-17T00:52:39.076145076Z" level=info msg="RemoveContainer for \"4d002eaced0dd14bd094546fd7621467b9a899d24e54feb09e467695ebc58e28\""
May 17 00:52:39.086611 env[1452]: time="2025-05-17T00:52:39.086528173Z" level=info msg="RemoveContainer for \"4d002eaced0dd14bd094546fd7621467b9a899d24e54feb09e467695ebc58e28\" returns successfully"
May 17 00:52:39.086801 kubelet[2445]: I0517 00:52:39.086775 2445 scope.go:117] "RemoveContainer" containerID="c1819190fb85e370c2d1ba760860a797232722de69a5d050e10a934bae3483a2"
May 17 00:52:39.087717 env[1452]: time="2025-05-17T00:52:39.087688423Z" level=info msg="RemoveContainer for \"c1819190fb85e370c2d1ba760860a797232722de69a5d050e10a934bae3483a2\""
May 17 00:52:39.096889 env[1452]: time="2025-05-17T00:52:39.096846231Z" level=info msg="RemoveContainer for \"c1819190fb85e370c2d1ba760860a797232722de69a5d050e10a934bae3483a2\" returns successfully"
May 17 00:52:39.097062 kubelet[2445]: I0517 00:52:39.097037 2445 scope.go:117] "RemoveContainer" containerID="e895950198c88732d5875ac6f48138bf72985d2740bdf6d5b7c008ad88abd3ab"
May 17 00:52:39.098121 env[1452]: time="2025-05-17T00:52:39.098086360Z" level=info msg="RemoveContainer for \"e895950198c88732d5875ac6f48138bf72985d2740bdf6d5b7c008ad88abd3ab\""
May 17 00:52:39.106440 env[1452]: time="2025-05-17T00:52:39.106408029Z" level=info msg="RemoveContainer for \"e895950198c88732d5875ac6f48138bf72985d2740bdf6d5b7c008ad88abd3ab\" returns successfully"
May 17 00:52:39.106655 kubelet[2445]: I0517 00:52:39.106632 2445 scope.go:117] "RemoveContainer" containerID="101cfcea6cafe1b89fb5df44c9ea9c05e496cf403e19dc7a8345695ea438e26a"
May 17 00:52:39.108736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe857e360a586dc0e00ecb3734493f58f5e3b04a939d0e96bad6180bdfaadf0a-rootfs.mount: Deactivated successfully.
May 17 00:52:39.108827 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1-rootfs.mount: Deactivated successfully.
May 17 00:52:39.108883 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1-shm.mount: Deactivated successfully.
May 17 00:52:39.108935 systemd[1]: var-lib-kubelet-pods-176de6be\x2d8d2d\x2d455b\x2dae9b\x2de09bae723cb5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgrxth.mount: Deactivated successfully.
May 17 00:52:39.108985 systemd[1]: var-lib-kubelet-pods-a6d17eff\x2de44c\x2d499f\x2d9d1f\x2d5bccae4b1278-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfl8rl.mount: Deactivated successfully.
May 17 00:52:39.109032 systemd[1]: var-lib-kubelet-pods-a6d17eff\x2de44c\x2d499f\x2d9d1f\x2d5bccae4b1278-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 17 00:52:39.109081 systemd[1]: var-lib-kubelet-pods-a6d17eff\x2de44c\x2d499f\x2d9d1f\x2d5bccae4b1278-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 17 00:52:39.110925 env[1452]: time="2025-05-17T00:52:39.110699241Z" level=info msg="RemoveContainer for \"101cfcea6cafe1b89fb5df44c9ea9c05e496cf403e19dc7a8345695ea438e26a\""
May 17 00:52:39.120914 env[1452]: time="2025-05-17T00:52:39.120858263Z" level=info msg="RemoveContainer for \"101cfcea6cafe1b89fb5df44c9ea9c05e496cf403e19dc7a8345695ea438e26a\" returns successfully"
May 17 00:52:39.121297 kubelet[2445]: I0517 00:52:39.121280 2445 scope.go:117] "RemoveContainer" containerID="ee022480d52e468911601964aff26f3fd3ff4d561be98c36bc72317df442262d"
May 17 00:52:39.122528 env[1452]: time="2025-05-17T00:52:39.122502662Z" level=info msg="RemoveContainer for \"ee022480d52e468911601964aff26f3fd3ff4d561be98c36bc72317df442262d\""
May 17 00:52:39.132724 env[1452]: time="2025-05-17T00:52:39.132694923Z" level=info msg="RemoveContainer for \"ee022480d52e468911601964aff26f3fd3ff4d561be98c36bc72317df442262d\" returns successfully"
May 17 00:52:39.133026 kubelet[2445]: I0517 00:52:39.133010 2445 scope.go:117] "RemoveContainer" containerID="4d002eaced0dd14bd094546fd7621467b9a899d24e54feb09e467695ebc58e28"
May 17 00:52:39.133336 env[1452]: time="2025-05-17T00:52:39.133287588Z" level=error msg="ContainerStatus for \"4d002eaced0dd14bd094546fd7621467b9a899d24e54feb09e467695ebc58e28\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4d002eaced0dd14bd094546fd7621467b9a899d24e54feb09e467695ebc58e28\": not found"
May 17 00:52:39.133620 kubelet[2445]: E0517 00:52:39.133603 2445 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d002eaced0dd14bd094546fd7621467b9a899d24e54feb09e467695ebc58e28\": not found" containerID="4d002eaced0dd14bd094546fd7621467b9a899d24e54feb09e467695ebc58e28"
May 17 00:52:39.133727 kubelet[2445]: I0517 00:52:39.133706 2445 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4d002eaced0dd14bd094546fd7621467b9a899d24e54feb09e467695ebc58e28"} err="failed to get container status \"4d002eaced0dd14bd094546fd7621467b9a899d24e54feb09e467695ebc58e28\": rpc error: code = NotFound desc = an error occurred when try to find container \"4d002eaced0dd14bd094546fd7621467b9a899d24e54feb09e467695ebc58e28\": not found"
May 17 00:52:39.133787 kubelet[2445]: I0517 00:52:39.133776 2445 scope.go:117] "RemoveContainer" containerID="c1819190fb85e370c2d1ba760860a797232722de69a5d050e10a934bae3483a2"
May 17 00:52:39.134025 env[1452]: time="2025-05-17T00:52:39.133973531Z" level=error msg="ContainerStatus for \"c1819190fb85e370c2d1ba760860a797232722de69a5d050e10a934bae3483a2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1819190fb85e370c2d1ba760860a797232722de69a5d050e10a934bae3483a2\": not found"
May 17 00:52:39.134194 kubelet[2445]: E0517 00:52:39.134152 2445 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c1819190fb85e370c2d1ba760860a797232722de69a5d050e10a934bae3483a2\": not found" containerID="c1819190fb85e370c2d1ba760860a797232722de69a5d050e10a934bae3483a2"
May 17 00:52:39.134284 kubelet[2445]: I0517 00:52:39.134266 2445 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c1819190fb85e370c2d1ba760860a797232722de69a5d050e10a934bae3483a2"} err="failed to get container status \"c1819190fb85e370c2d1ba760860a797232722de69a5d050e10a934bae3483a2\": rpc error: code = NotFound desc = an error occurred when try to find container \"c1819190fb85e370c2d1ba760860a797232722de69a5d050e10a934bae3483a2\": not found"
May 17 00:52:39.134346 kubelet[2445]: I0517 00:52:39.134335 2445 scope.go:117] "RemoveContainer" containerID="e895950198c88732d5875ac6f48138bf72985d2740bdf6d5b7c008ad88abd3ab"
May 17 00:52:39.134573 env[1452]: time="2025-05-17T00:52:39.134533077Z" level=error msg="ContainerStatus for \"e895950198c88732d5875ac6f48138bf72985d2740bdf6d5b7c008ad88abd3ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e895950198c88732d5875ac6f48138bf72985d2740bdf6d5b7c008ad88abd3ab\": not found"
May 17 00:52:39.134756 kubelet[2445]: E0517 00:52:39.134740 2445 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e895950198c88732d5875ac6f48138bf72985d2740bdf6d5b7c008ad88abd3ab\": not found" containerID="e895950198c88732d5875ac6f48138bf72985d2740bdf6d5b7c008ad88abd3ab"
May 17 00:52:39.134847 kubelet[2445]: I0517 00:52:39.134827 2445 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e895950198c88732d5875ac6f48138bf72985d2740bdf6d5b7c008ad88abd3ab"} err="failed to get container status \"e895950198c88732d5875ac6f48138bf72985d2740bdf6d5b7c008ad88abd3ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"e895950198c88732d5875ac6f48138bf72985d2740bdf6d5b7c008ad88abd3ab\": not found"
May 17 00:52:39.134908 kubelet[2445]: I0517 00:52:39.134897 2445 scope.go:117] "RemoveContainer" containerID="101cfcea6cafe1b89fb5df44c9ea9c05e496cf403e19dc7a8345695ea438e26a"
May 17 00:52:39.135127 env[1452]: time="2025-05-17T00:52:39.135088343Z" level=error msg="ContainerStatus for \"101cfcea6cafe1b89fb5df44c9ea9c05e496cf403e19dc7a8345695ea438e26a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"101cfcea6cafe1b89fb5df44c9ea9c05e496cf403e19dc7a8345695ea438e26a\": not found"
May 17 00:52:39.135403 kubelet[2445]: E0517 00:52:39.135386 2445 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"101cfcea6cafe1b89fb5df44c9ea9c05e496cf403e19dc7a8345695ea438e26a\": not found" containerID="101cfcea6cafe1b89fb5df44c9ea9c05e496cf403e19dc7a8345695ea438e26a"
May 17 00:52:39.135522 kubelet[2445]: I0517 00:52:39.135503 2445 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"101cfcea6cafe1b89fb5df44c9ea9c05e496cf403e19dc7a8345695ea438e26a"} err="failed to get container status \"101cfcea6cafe1b89fb5df44c9ea9c05e496cf403e19dc7a8345695ea438e26a\": rpc error: code = NotFound desc = an error occurred when try to find container \"101cfcea6cafe1b89fb5df44c9ea9c05e496cf403e19dc7a8345695ea438e26a\": not found"
May 17 00:52:39.135600 kubelet[2445]: I0517 00:52:39.135587 2445 scope.go:117] "RemoveContainer" containerID="ee022480d52e468911601964aff26f3fd3ff4d561be98c36bc72317df442262d"
May 17 00:52:39.136089 env[1452]: time="2025-05-17T00:52:39.136031239Z" level=error msg="ContainerStatus for \"ee022480d52e468911601964aff26f3fd3ff4d561be98c36bc72317df442262d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ee022480d52e468911601964aff26f3fd3ff4d561be98c36bc72317df442262d\": not found"
May 17 00:52:39.136308 kubelet[2445]: E0517 00:52:39.136291 2445 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ee022480d52e468911601964aff26f3fd3ff4d561be98c36bc72317df442262d\": not found" containerID="ee022480d52e468911601964aff26f3fd3ff4d561be98c36bc72317df442262d"
May 17 00:52:39.136407 kubelet[2445]: I0517 00:52:39.136391 2445 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ee022480d52e468911601964aff26f3fd3ff4d561be98c36bc72317df442262d"} err="failed to get container status \"ee022480d52e468911601964aff26f3fd3ff4d561be98c36bc72317df442262d\": rpc error: code = NotFound desc = an error occurred when try to find container \"ee022480d52e468911601964aff26f3fd3ff4d561be98c36bc72317df442262d\": not found"
May 17 00:52:40.120483 sshd[3988]: pam_unix(sshd:session): session closed for user core
May 17 00:52:40.123252 systemd[1]: sshd@19-10.200.20.19:22-10.200.16.10:42970.service: Deactivated successfully.
May 17 00:52:40.123934 systemd[1]: session-22.scope: Deactivated successfully.
May 17 00:52:40.124084 systemd[1]: session-22.scope: Consumed 2.085s CPU time.
May 17 00:52:40.124500 systemd-logind[1440]: Session 22 logged out. Waiting for processes to exit.
May 17 00:52:40.125525 systemd-logind[1440]: Removed session 22.
May 17 00:52:40.194785 systemd[1]: Started sshd@20-10.200.20.19:22-10.200.16.10:33352.service.
May 17 00:52:40.650969 sshd[4148]: Accepted publickey for core from 10.200.16.10 port 33352 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg
May 17 00:52:40.652358 sshd[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 00:52:40.656814 systemd[1]: Started session-23.scope.
May 17 00:52:40.657834 systemd-logind[1440]: New session 23 of user core.
May 17 00:52:40.701315 env[1452]: time="2025-05-17T00:52:40.701127919Z" level=info msg="StopPodSandbox for \"83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1\""
May 17 00:52:40.701315 env[1452]: time="2025-05-17T00:52:40.701224237Z" level=info msg="TearDown network for sandbox \"83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1\" successfully"
May 17 00:52:40.701315 env[1452]: time="2025-05-17T00:52:40.701257276Z" level=info msg="StopPodSandbox for \"83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1\" returns successfully"
May 17 00:52:40.702194 env[1452]: time="2025-05-17T00:52:40.702119454Z" level=info msg="RemovePodSandbox for \"83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1\""
May 17 00:52:40.703323 env[1452]: time="2025-05-17T00:52:40.702149173Z" level=info msg="Forcibly stopping sandbox \"83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1\""
May 17 00:52:40.703323 env[1452]: time="2025-05-17T00:52:40.702332249Z" level=info msg="TearDown network for sandbox \"83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1\" successfully"
May 17 00:52:40.709575 env[1452]: time="2025-05-17T00:52:40.709547549Z" level=info msg="RemovePodSandbox \"83f1072bf4361dbdd067fa8d5f84d2af555a5b86fc2b41505826d62bc19815d1\" returns successfully"
May 17 00:52:40.710265 env[1452]: time="2025-05-17T00:52:40.710228812Z" level=info msg="StopPodSandbox for \"fe857e360a586dc0e00ecb3734493f58f5e3b04a939d0e96bad6180bdfaadf0a\""
May 17 00:52:40.710338 env[1452]: time="2025-05-17T00:52:40.710302890Z" level=info msg="TearDown network for sandbox \"fe857e360a586dc0e00ecb3734493f58f5e3b04a939d0e96bad6180bdfaadf0a\" successfully"
May 17 00:52:40.710338 env[1452]: time="2025-05-17T00:52:40.710331089Z" level=info msg="StopPodSandbox for \"fe857e360a586dc0e00ecb3734493f58f5e3b04a939d0e96bad6180bdfaadf0a\" returns successfully"
May 17 00:52:40.710939 env[1452]: time="2025-05-17T00:52:40.710912914Z" level=info msg="RemovePodSandbox for \"fe857e360a586dc0e00ecb3734493f58f5e3b04a939d0e96bad6180bdfaadf0a\""
May 17 00:52:40.711045 env[1452]: time="2025-05-17T00:52:40.711014912Z" level=info msg="Forcibly stopping sandbox \"fe857e360a586dc0e00ecb3734493f58f5e3b04a939d0e96bad6180bdfaadf0a\""
May 17 00:52:40.711163 env[1452]: time="2025-05-17T00:52:40.711146989Z" level=info msg="TearDown network for sandbox \"fe857e360a586dc0e00ecb3734493f58f5e3b04a939d0e96bad6180bdfaadf0a\" successfully"
May 17 00:52:40.712982 kubelet[2445]: I0517 00:52:40.712945 2445 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="176de6be-8d2d-455b-ae9b-e09bae723cb5" path="/var/lib/kubelet/pods/176de6be-8d2d-455b-ae9b-e09bae723cb5/volumes"
May 17 00:52:40.713532 kubelet[2445]: I0517 00:52:40.713511 2445 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6d17eff-e44c-499f-9d1f-5bccae4b1278" path="/var/lib/kubelet/pods/a6d17eff-e44c-499f-9d1f-5bccae4b1278/volumes"
May 17 00:52:40.721253 env[1452]: time="2025-05-17T00:52:40.721221777Z" level=info msg="RemovePodSandbox \"fe857e360a586dc0e00ecb3734493f58f5e3b04a939d0e96bad6180bdfaadf0a\" returns successfully"
May 17 00:52:40.793433 kubelet[2445]: E0517 00:52:40.793399 2445 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 17 00:52:41.796051 systemd[1]: Created slice kubepods-burstable-pod70a28cb8_cc44_4420_9a90_072a2856f7ad.slice.
May 17 00:52:41.845030 sshd[4148]: pam_unix(sshd:session): session closed for user core
May 17 00:52:41.848272 systemd[1]: sshd@20-10.200.20.19:22-10.200.16.10:33352.service: Deactivated successfully.
May 17 00:52:41.848971 systemd[1]: session-23.scope: Deactivated successfully.
May 17 00:52:41.849936 systemd-logind[1440]: Session 23 logged out. Waiting for processes to exit.
May 17 00:52:41.850721 systemd-logind[1440]: Removed session 23.
May 17 00:52:41.857070 kubelet[2445]: I0517 00:52:41.857037 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-cilium-cgroup\") pod \"cilium-2nsxf\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " pod="kube-system/cilium-2nsxf"
May 17 00:52:41.857431 kubelet[2445]: I0517 00:52:41.857410 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk8rg\" (UniqueName: \"kubernetes.io/projected/70a28cb8-cc44-4420-9a90-072a2856f7ad-kube-api-access-sk8rg\") pod \"cilium-2nsxf\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " pod="kube-system/cilium-2nsxf"
May 17 00:52:41.857527 kubelet[2445]: I0517 00:52:41.857509 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-bpf-maps\") pod \"cilium-2nsxf\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " pod="kube-system/cilium-2nsxf"
May 17 00:52:41.857599 kubelet[2445]: I0517 00:52:41.857587 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-etc-cni-netd\") pod \"cilium-2nsxf\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " pod="kube-system/cilium-2nsxf"
May 17 00:52:41.857675 kubelet[2445]: I0517 00:52:41.857661 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-lib-modules\") pod \"cilium-2nsxf\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " pod="kube-system/cilium-2nsxf"
May 17 00:52:41.857743 kubelet[2445]: I0517 00:52:41.857731 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-xtables-lock\") pod \"cilium-2nsxf\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " pod="kube-system/cilium-2nsxf"
May 17 00:52:41.857812 kubelet[2445]: I0517 00:52:41.857801 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70a28cb8-cc44-4420-9a90-072a2856f7ad-hubble-tls\") pod \"cilium-2nsxf\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " pod="kube-system/cilium-2nsxf"
May 17 00:52:41.857889 kubelet[2445]: I0517 00:52:41.857877 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-cni-path\") pod \"cilium-2nsxf\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " pod="kube-system/cilium-2nsxf"
May 17 00:52:41.857963 kubelet[2445]: I0517 00:52:41.857951 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-host-proc-sys-kernel\") pod \"cilium-2nsxf\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " pod="kube-system/cilium-2nsxf"
May 17 00:52:41.858044 kubelet[2445]: I0517 00:52:41.858030 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-hostproc\") pod \"cilium-2nsxf\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " pod="kube-system/cilium-2nsxf"
May 17 00:52:41.858122 kubelet[2445]: I0517 00:52:41.858111 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-cilium-run\") pod \"cilium-2nsxf\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " pod="kube-system/cilium-2nsxf"
May 17 00:52:41.858214 kubelet[2445]: I0517 00:52:41.858201 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70a28cb8-cc44-4420-9a90-072a2856f7ad-clustermesh-secrets\") pod \"cilium-2nsxf\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " pod="kube-system/cilium-2nsxf"
May 17 00:52:41.858376 kubelet[2445]: I0517 00:52:41.858300 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-host-proc-sys-net\") pod \"cilium-2nsxf\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " pod="kube-system/cilium-2nsxf"
May 17 00:52:41.858376 kubelet[2445]: I0517 00:52:41.858369 2445
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70a28cb8-cc44-4420-9a90-072a2856f7ad-cilium-config-path\") pod \"cilium-2nsxf\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " pod="kube-system/cilium-2nsxf" May 17 00:52:41.858453 kubelet[2445]: I0517 00:52:41.858395 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/70a28cb8-cc44-4420-9a90-072a2856f7ad-cilium-ipsec-secrets\") pod \"cilium-2nsxf\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " pod="kube-system/cilium-2nsxf" May 17 00:52:41.925663 systemd[1]: Started sshd@21-10.200.20.19:22-10.200.16.10:33366.service. May 17 00:52:42.099369 env[1452]: time="2025-05-17T00:52:42.099319437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2nsxf,Uid:70a28cb8-cc44-4420-9a90-072a2856f7ad,Namespace:kube-system,Attempt:0,}" May 17 00:52:42.129734 env[1452]: time="2025-05-17T00:52:42.129648140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:52:42.129979 env[1452]: time="2025-05-17T00:52:42.129939693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:52:42.130090 env[1452]: time="2025-05-17T00:52:42.130069090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:52:42.130459 env[1452]: time="2025-05-17T00:52:42.130385402Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/23a17805aabbf472b108601921479b6946f64b15c168815c6b987dfa8809ab27 pid=4175 runtime=io.containerd.runc.v2 May 17 00:52:42.141692 systemd[1]: Started cri-containerd-23a17805aabbf472b108601921479b6946f64b15c168815c6b987dfa8809ab27.scope. May 17 00:52:42.167129 env[1452]: time="2025-05-17T00:52:42.167093870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2nsxf,Uid:70a28cb8-cc44-4420-9a90-072a2856f7ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"23a17805aabbf472b108601921479b6946f64b15c168815c6b987dfa8809ab27\"" May 17 00:52:42.175024 env[1452]: time="2025-05-17T00:52:42.174990518Z" level=info msg="CreateContainer within sandbox \"23a17805aabbf472b108601921479b6946f64b15c168815c6b987dfa8809ab27\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:52:42.205449 env[1452]: time="2025-05-17T00:52:42.205404979Z" level=info msg="CreateContainer within sandbox \"23a17805aabbf472b108601921479b6946f64b15c168815c6b987dfa8809ab27\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c73246c5e25d61bcca6a609eda36b59cdd14fe451a167f273a69fa12c072b37e\"" May 17 00:52:42.205851 env[1452]: time="2025-05-17T00:52:42.205827289Z" level=info msg="StartContainer for \"c73246c5e25d61bcca6a609eda36b59cdd14fe451a167f273a69fa12c072b37e\"" May 17 00:52:42.222408 systemd[1]: Started cri-containerd-c73246c5e25d61bcca6a609eda36b59cdd14fe451a167f273a69fa12c072b37e.scope. May 17 00:52:42.232761 systemd[1]: cri-containerd-c73246c5e25d61bcca6a609eda36b59cdd14fe451a167f273a69fa12c072b37e.scope: Deactivated successfully. May 17 00:52:42.233014 systemd[1]: Stopped cri-containerd-c73246c5e25d61bcca6a609eda36b59cdd14fe451a167f273a69fa12c072b37e.scope. 
May 17 00:52:42.284728 env[1452]: time="2025-05-17T00:52:42.284673413Z" level=info msg="shim disconnected" id=c73246c5e25d61bcca6a609eda36b59cdd14fe451a167f273a69fa12c072b37e May 17 00:52:42.284728 env[1452]: time="2025-05-17T00:52:42.284727051Z" level=warning msg="cleaning up after shim disconnected" id=c73246c5e25d61bcca6a609eda36b59cdd14fe451a167f273a69fa12c072b37e namespace=k8s.io May 17 00:52:42.284728 env[1452]: time="2025-05-17T00:52:42.284736731Z" level=info msg="cleaning up dead shim" May 17 00:52:42.291223 env[1452]: time="2025-05-17T00:52:42.291148055Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:52:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4233 runtime=io.containerd.runc.v2\ntime=\"2025-05-17T00:52:42Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c73246c5e25d61bcca6a609eda36b59cdd14fe451a167f273a69fa12c072b37e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" May 17 00:52:42.291514 env[1452]: time="2025-05-17T00:52:42.291416249Z" level=error msg="copy shim log" error="read /proc/self/fd/34: file already closed" May 17 00:52:42.291767 env[1452]: time="2025-05-17T00:52:42.291730601Z" level=error msg="Failed to pipe stdout of container \"c73246c5e25d61bcca6a609eda36b59cdd14fe451a167f273a69fa12c072b37e\"" error="reading from a closed fifo" May 17 00:52:42.293205 env[1452]: time="2025-05-17T00:52:42.292212269Z" level=error msg="Failed to pipe stderr of container \"c73246c5e25d61bcca6a609eda36b59cdd14fe451a167f273a69fa12c072b37e\"" error="reading from a closed fifo" May 17 00:52:42.296250 env[1452]: time="2025-05-17T00:52:42.296207132Z" level=error msg="StartContainer for \"c73246c5e25d61bcca6a609eda36b59cdd14fe451a167f273a69fa12c072b37e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" May 17 00:52:42.296874 kubelet[2445]: E0517 00:52:42.296503 2445 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="c73246c5e25d61bcca6a609eda36b59cdd14fe451a167f273a69fa12c072b37e" May 17 00:52:42.296874 kubelet[2445]: E0517 00:52:42.296649 2445 kuberuntime_manager.go:1358] "Unhandled Error" err=< May 17 00:52:42.296874 kubelet[2445]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; May 17 00:52:42.296874 kubelet[2445]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; May 17 00:52:42.296874 kubelet[2445]: rm /hostbin/cilium-mount May 17 00:52:42.297062 kubelet[2445]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sk8rg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-2nsxf_kube-system(70a28cb8-cc44-4420-9a90-072a2856f7ad): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown May 17 00:52:42.297062 kubelet[2445]: > logger="UnhandledError" May 17 00:52:42.298038 kubelet[2445]: E0517 00:52:42.297985 2445 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-2nsxf" podUID="70a28cb8-cc44-4420-9a90-072a2856f7ad" May 17 00:52:42.408384 sshd[4161]: Accepted publickey for core from 10.200.16.10 port 33366 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:52:42.410274 sshd[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:52:42.414794 systemd[1]: Started session-24.scope. May 17 00:52:42.415741 systemd-logind[1440]: New session 24 of user core. May 17 00:52:42.844020 sshd[4161]: pam_unix(sshd:session): session closed for user core May 17 00:52:42.846760 systemd-logind[1440]: Session 24 logged out. Waiting for processes to exit. May 17 00:52:42.846931 systemd[1]: sshd@21-10.200.20.19:22-10.200.16.10:33366.service: Deactivated successfully. May 17 00:52:42.847644 systemd[1]: session-24.scope: Deactivated successfully. May 17 00:52:42.848323 systemd-logind[1440]: Removed session 24. May 17 00:52:42.923810 systemd[1]: Started sshd@22-10.200.20.19:22-10.200.16.10:33374.service. May 17 00:52:43.074210 env[1452]: time="2025-05-17T00:52:43.071306558Z" level=info msg="StopPodSandbox for \"23a17805aabbf472b108601921479b6946f64b15c168815c6b987dfa8809ab27\"" May 17 00:52:43.074210 env[1452]: time="2025-05-17T00:52:43.071378916Z" level=info msg="Container to stop \"c73246c5e25d61bcca6a609eda36b59cdd14fe451a167f273a69fa12c072b37e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:52:43.073765 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-23a17805aabbf472b108601921479b6946f64b15c168815c6b987dfa8809ab27-shm.mount: Deactivated successfully. 
May 17 00:52:43.080872 systemd[1]: cri-containerd-23a17805aabbf472b108601921479b6946f64b15c168815c6b987dfa8809ab27.scope: Deactivated successfully. May 17 00:52:43.100718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23a17805aabbf472b108601921479b6946f64b15c168815c6b987dfa8809ab27-rootfs.mount: Deactivated successfully. May 17 00:52:43.121329 env[1452]: time="2025-05-17T00:52:43.121279440Z" level=info msg="shim disconnected" id=23a17805aabbf472b108601921479b6946f64b15c168815c6b987dfa8809ab27 May 17 00:52:43.121329 env[1452]: time="2025-05-17T00:52:43.121328038Z" level=warning msg="cleaning up after shim disconnected" id=23a17805aabbf472b108601921479b6946f64b15c168815c6b987dfa8809ab27 namespace=k8s.io May 17 00:52:43.121681 env[1452]: time="2025-05-17T00:52:43.121337118Z" level=info msg="cleaning up dead shim" May 17 00:52:43.128667 env[1452]: time="2025-05-17T00:52:43.128626744Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:52:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4276 runtime=io.containerd.runc.v2\n" May 17 00:52:43.128948 env[1452]: time="2025-05-17T00:52:43.128923216Z" level=info msg="TearDown network for sandbox \"23a17805aabbf472b108601921479b6946f64b15c168815c6b987dfa8809ab27\" successfully" May 17 00:52:43.128998 env[1452]: time="2025-05-17T00:52:43.128947696Z" level=info msg="StopPodSandbox for \"23a17805aabbf472b108601921479b6946f64b15c168815c6b987dfa8809ab27\" returns successfully" May 17 00:52:43.268715 kubelet[2445]: I0517 00:52:43.268662 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-cilium-cgroup\") pod \"70a28cb8-cc44-4420-9a90-072a2856f7ad\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " May 17 00:52:43.269126 kubelet[2445]: I0517 00:52:43.268723 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-host-proc-sys-kernel\") pod \"70a28cb8-cc44-4420-9a90-072a2856f7ad\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " May 17 00:52:43.269126 kubelet[2445]: I0517 00:52:43.268766 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/70a28cb8-cc44-4420-9a90-072a2856f7ad-cilium-ipsec-secrets\") pod \"70a28cb8-cc44-4420-9a90-072a2856f7ad\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " May 17 00:52:43.269126 kubelet[2445]: I0517 00:52:43.268782 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-host-proc-sys-net\") pod \"70a28cb8-cc44-4420-9a90-072a2856f7ad\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " May 17 00:52:43.269126 kubelet[2445]: I0517 00:52:43.268799 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk8rg\" (UniqueName: \"kubernetes.io/projected/70a28cb8-cc44-4420-9a90-072a2856f7ad-kube-api-access-sk8rg\") pod \"70a28cb8-cc44-4420-9a90-072a2856f7ad\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " May 17 00:52:43.269126 kubelet[2445]: I0517 00:52:43.268812 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-lib-modules\") pod \"70a28cb8-cc44-4420-9a90-072a2856f7ad\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " May 17 00:52:43.269126 kubelet[2445]: I0517 00:52:43.268825 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-xtables-lock\") pod \"70a28cb8-cc44-4420-9a90-072a2856f7ad\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " May 17 00:52:43.269308 
kubelet[2445]: I0517 00:52:43.268839 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-bpf-maps\") pod \"70a28cb8-cc44-4420-9a90-072a2856f7ad\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " May 17 00:52:43.269308 kubelet[2445]: I0517 00:52:43.268853 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-cilium-run\") pod \"70a28cb8-cc44-4420-9a90-072a2856f7ad\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " May 17 00:52:43.269308 kubelet[2445]: I0517 00:52:43.268870 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70a28cb8-cc44-4420-9a90-072a2856f7ad-hubble-tls\") pod \"70a28cb8-cc44-4420-9a90-072a2856f7ad\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " May 17 00:52:43.269308 kubelet[2445]: I0517 00:52:43.268883 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-cni-path\") pod \"70a28cb8-cc44-4420-9a90-072a2856f7ad\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " May 17 00:52:43.269308 kubelet[2445]: I0517 00:52:43.268897 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70a28cb8-cc44-4420-9a90-072a2856f7ad-clustermesh-secrets\") pod \"70a28cb8-cc44-4420-9a90-072a2856f7ad\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " May 17 00:52:43.269308 kubelet[2445]: I0517 00:52:43.268914 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70a28cb8-cc44-4420-9a90-072a2856f7ad-cilium-config-path\") pod 
\"70a28cb8-cc44-4420-9a90-072a2856f7ad\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " May 17 00:52:43.269440 kubelet[2445]: I0517 00:52:43.268927 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-etc-cni-netd\") pod \"70a28cb8-cc44-4420-9a90-072a2856f7ad\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " May 17 00:52:43.269440 kubelet[2445]: I0517 00:52:43.268941 2445 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-hostproc\") pod \"70a28cb8-cc44-4420-9a90-072a2856f7ad\" (UID: \"70a28cb8-cc44-4420-9a90-072a2856f7ad\") " May 17 00:52:43.269440 kubelet[2445]: I0517 00:52:43.269003 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-hostproc" (OuterVolumeSpecName: "hostproc") pod "70a28cb8-cc44-4420-9a90-072a2856f7ad" (UID: "70a28cb8-cc44-4420-9a90-072a2856f7ad"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:43.269440 kubelet[2445]: I0517 00:52:43.269052 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "70a28cb8-cc44-4420-9a90-072a2856f7ad" (UID: "70a28cb8-cc44-4420-9a90-072a2856f7ad"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:43.269440 kubelet[2445]: I0517 00:52:43.269070 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "70a28cb8-cc44-4420-9a90-072a2856f7ad" (UID: "70a28cb8-cc44-4420-9a90-072a2856f7ad"). 
InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:43.269606 kubelet[2445]: I0517 00:52:43.269083 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "70a28cb8-cc44-4420-9a90-072a2856f7ad" (UID: "70a28cb8-cc44-4420-9a90-072a2856f7ad"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:43.269606 kubelet[2445]: I0517 00:52:43.269377 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "70a28cb8-cc44-4420-9a90-072a2856f7ad" (UID: "70a28cb8-cc44-4420-9a90-072a2856f7ad"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:43.269606 kubelet[2445]: I0517 00:52:43.269398 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "70a28cb8-cc44-4420-9a90-072a2856f7ad" (UID: "70a28cb8-cc44-4420-9a90-072a2856f7ad"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:43.273380 kubelet[2445]: I0517 00:52:43.272107 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70a28cb8-cc44-4420-9a90-072a2856f7ad-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "70a28cb8-cc44-4420-9a90-072a2856f7ad" (UID: "70a28cb8-cc44-4420-9a90-072a2856f7ad"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:52:43.273380 kubelet[2445]: I0517 00:52:43.272149 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "70a28cb8-cc44-4420-9a90-072a2856f7ad" (UID: "70a28cb8-cc44-4420-9a90-072a2856f7ad"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:43.274979 systemd[1]: var-lib-kubelet-pods-70a28cb8\x2dcc44\x2d4420\x2d9a90\x2d072a2856f7ad-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:52:43.275076 systemd[1]: var-lib-kubelet-pods-70a28cb8\x2dcc44\x2d4420\x2d9a90\x2d072a2856f7ad-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 17 00:52:43.277395 kubelet[2445]: I0517 00:52:43.277369 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70a28cb8-cc44-4420-9a90-072a2856f7ad-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "70a28cb8-cc44-4420-9a90-072a2856f7ad" (UID: "70a28cb8-cc44-4420-9a90-072a2856f7ad"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:52:43.277820 kubelet[2445]: I0517 00:52:43.277691 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70a28cb8-cc44-4420-9a90-072a2856f7ad-kube-api-access-sk8rg" (OuterVolumeSpecName: "kube-api-access-sk8rg") pod "70a28cb8-cc44-4420-9a90-072a2856f7ad" (UID: "70a28cb8-cc44-4420-9a90-072a2856f7ad"). InnerVolumeSpecName "kube-api-access-sk8rg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:52:43.277902 kubelet[2445]: I0517 00:52:43.277713 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "70a28cb8-cc44-4420-9a90-072a2856f7ad" (UID: "70a28cb8-cc44-4420-9a90-072a2856f7ad"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:43.279834 kubelet[2445]: I0517 00:52:43.279798 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70a28cb8-cc44-4420-9a90-072a2856f7ad-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "70a28cb8-cc44-4420-9a90-072a2856f7ad" (UID: "70a28cb8-cc44-4420-9a90-072a2856f7ad"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:52:43.279916 kubelet[2445]: I0517 00:52:43.279847 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-cni-path" (OuterVolumeSpecName: "cni-path") pod "70a28cb8-cc44-4420-9a90-072a2856f7ad" (UID: "70a28cb8-cc44-4420-9a90-072a2856f7ad"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:43.279916 kubelet[2445]: I0517 00:52:43.279864 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "70a28cb8-cc44-4420-9a90-072a2856f7ad" (UID: "70a28cb8-cc44-4420-9a90-072a2856f7ad"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:52:43.280131 kubelet[2445]: I0517 00:52:43.280111 2445 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70a28cb8-cc44-4420-9a90-072a2856f7ad-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "70a28cb8-cc44-4420-9a90-072a2856f7ad" (UID: "70a28cb8-cc44-4420-9a90-072a2856f7ad"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:52:43.369370 kubelet[2445]: I0517 00:52:43.369275 2445 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-bpf-maps\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\"" May 17 00:52:43.369716 kubelet[2445]: I0517 00:52:43.369494 2445 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-cilium-run\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\"" May 17 00:52:43.370141 kubelet[2445]: I0517 00:52:43.370128 2445 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70a28cb8-cc44-4420-9a90-072a2856f7ad-hubble-tls\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\"" May 17 00:52:43.370322 kubelet[2445]: I0517 00:52:43.370309 2445 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-cni-path\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\"" May 17 00:52:43.370390 kubelet[2445]: I0517 00:52:43.370380 2445 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70a28cb8-cc44-4420-9a90-072a2856f7ad-clustermesh-secrets\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\"" May 17 00:52:43.370453 kubelet[2445]: I0517 00:52:43.370442 2445 reconciler_common.go:299] 
"Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70a28cb8-cc44-4420-9a90-072a2856f7ad-cilium-config-path\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\"" May 17 00:52:43.370519 kubelet[2445]: I0517 00:52:43.370508 2445 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-etc-cni-netd\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\"" May 17 00:52:43.370579 kubelet[2445]: I0517 00:52:43.370569 2445 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-hostproc\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\"" May 17 00:52:43.370639 kubelet[2445]: I0517 00:52:43.370629 2445 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-cilium-cgroup\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\"" May 17 00:52:43.370699 kubelet[2445]: I0517 00:52:43.370688 2445 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-host-proc-sys-kernel\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\"" May 17 00:52:43.370759 kubelet[2445]: I0517 00:52:43.370748 2445 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/70a28cb8-cc44-4420-9a90-072a2856f7ad-cilium-ipsec-secrets\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\"" May 17 00:52:43.370823 kubelet[2445]: I0517 00:52:43.370812 2445 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-host-proc-sys-net\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\"" May 17 00:52:43.370887 kubelet[2445]: I0517 00:52:43.370876 2445 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sk8rg\" (UniqueName: \"kubernetes.io/projected/70a28cb8-cc44-4420-9a90-072a2856f7ad-kube-api-access-sk8rg\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\"" May 17 00:52:43.370948 kubelet[2445]: I0517 00:52:43.370937 2445 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-lib-modules\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\"" May 17 00:52:43.371010 kubelet[2445]: I0517 00:52:43.370999 2445 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70a28cb8-cc44-4420-9a90-072a2856f7ad-xtables-lock\") on node \"ci-3510.3.7-n-5e40c0776b\" DevicePath \"\"" May 17 00:52:43.410435 sshd[4256]: Accepted publickey for core from 10.200.16.10 port 33374 ssh2: RSA SHA256:kTalk4vvVOHJD+odK+kI4Z4CxTmNI3TSVyFiPn8PnHg May 17 00:52:43.411836 sshd[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:52:43.415839 systemd-logind[1440]: New session 25 of user core. May 17 00:52:43.416307 systemd[1]: Started session-25.scope. May 17 00:52:43.965458 systemd[1]: var-lib-kubelet-pods-70a28cb8\x2dcc44\x2d4420\x2d9a90\x2d072a2856f7ad-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsk8rg.mount: Deactivated successfully. May 17 00:52:43.965550 systemd[1]: var-lib-kubelet-pods-70a28cb8\x2dcc44\x2d4420\x2d9a90\x2d072a2856f7ad-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 17 00:52:44.074246 kubelet[2445]: I0517 00:52:44.074222 2445 scope.go:117] "RemoveContainer" containerID="c73246c5e25d61bcca6a609eda36b59cdd14fe451a167f273a69fa12c072b37e" May 17 00:52:44.075597 env[1452]: time="2025-05-17T00:52:44.075327753Z" level=info msg="RemoveContainer for \"c73246c5e25d61bcca6a609eda36b59cdd14fe451a167f273a69fa12c072b37e\"" May 17 00:52:44.079127 systemd[1]: Removed slice kubepods-burstable-pod70a28cb8_cc44_4420_9a90_072a2856f7ad.slice. May 17 00:52:44.087970 env[1452]: time="2025-05-17T00:52:44.087891576Z" level=info msg="RemoveContainer for \"c73246c5e25d61bcca6a609eda36b59cdd14fe451a167f273a69fa12c072b37e\" returns successfully" May 17 00:52:44.146960 systemd[1]: Created slice kubepods-burstable-pod6dbd7bd6_1126_4dfd_9073_bda0ee8106f2.slice. May 17 00:52:44.278124 kubelet[2445]: I0517 00:52:44.278013 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6dbd7bd6-1126-4dfd-9073-bda0ee8106f2-cni-path\") pod \"cilium-pjlx7\" (UID: \"6dbd7bd6-1126-4dfd-9073-bda0ee8106f2\") " pod="kube-system/cilium-pjlx7" May 17 00:52:44.278124 kubelet[2445]: I0517 00:52:44.278053 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6dbd7bd6-1126-4dfd-9073-bda0ee8106f2-cilium-ipsec-secrets\") pod \"cilium-pjlx7\" (UID: \"6dbd7bd6-1126-4dfd-9073-bda0ee8106f2\") " pod="kube-system/cilium-pjlx7" May 17 00:52:44.278124 kubelet[2445]: I0517 00:52:44.278074 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6dbd7bd6-1126-4dfd-9073-bda0ee8106f2-cilium-run\") pod \"cilium-pjlx7\" (UID: \"6dbd7bd6-1126-4dfd-9073-bda0ee8106f2\") " pod="kube-system/cilium-pjlx7" May 17 00:52:44.278124 kubelet[2445]: I0517 00:52:44.278090 2445 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6dbd7bd6-1126-4dfd-9073-bda0ee8106f2-cilium-cgroup\") pod \"cilium-pjlx7\" (UID: \"6dbd7bd6-1126-4dfd-9073-bda0ee8106f2\") " pod="kube-system/cilium-pjlx7" May 17 00:52:44.278124 kubelet[2445]: I0517 00:52:44.278106 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6dbd7bd6-1126-4dfd-9073-bda0ee8106f2-cilium-config-path\") pod \"cilium-pjlx7\" (UID: \"6dbd7bd6-1126-4dfd-9073-bda0ee8106f2\") " pod="kube-system/cilium-pjlx7" May 17 00:52:44.278535 kubelet[2445]: I0517 00:52:44.278136 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6dbd7bd6-1126-4dfd-9073-bda0ee8106f2-host-proc-sys-kernel\") pod \"cilium-pjlx7\" (UID: \"6dbd7bd6-1126-4dfd-9073-bda0ee8106f2\") " pod="kube-system/cilium-pjlx7" May 17 00:52:44.278535 kubelet[2445]: I0517 00:52:44.278152 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6dbd7bd6-1126-4dfd-9073-bda0ee8106f2-etc-cni-netd\") pod \"cilium-pjlx7\" (UID: \"6dbd7bd6-1126-4dfd-9073-bda0ee8106f2\") " pod="kube-system/cilium-pjlx7" May 17 00:52:44.278535 kubelet[2445]: I0517 00:52:44.278166 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6dbd7bd6-1126-4dfd-9073-bda0ee8106f2-host-proc-sys-net\") pod \"cilium-pjlx7\" (UID: \"6dbd7bd6-1126-4dfd-9073-bda0ee8106f2\") " pod="kube-system/cilium-pjlx7" May 17 00:52:44.278535 kubelet[2445]: I0517 00:52:44.278196 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/6dbd7bd6-1126-4dfd-9073-bda0ee8106f2-bpf-maps\") pod \"cilium-pjlx7\" (UID: \"6dbd7bd6-1126-4dfd-9073-bda0ee8106f2\") " pod="kube-system/cilium-pjlx7" May 17 00:52:44.278535 kubelet[2445]: I0517 00:52:44.278214 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6dbd7bd6-1126-4dfd-9073-bda0ee8106f2-clustermesh-secrets\") pod \"cilium-pjlx7\" (UID: \"6dbd7bd6-1126-4dfd-9073-bda0ee8106f2\") " pod="kube-system/cilium-pjlx7" May 17 00:52:44.278535 kubelet[2445]: I0517 00:52:44.278229 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6dbd7bd6-1126-4dfd-9073-bda0ee8106f2-hostproc\") pod \"cilium-pjlx7\" (UID: \"6dbd7bd6-1126-4dfd-9073-bda0ee8106f2\") " pod="kube-system/cilium-pjlx7" May 17 00:52:44.278679 kubelet[2445]: I0517 00:52:44.278249 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6dbd7bd6-1126-4dfd-9073-bda0ee8106f2-lib-modules\") pod \"cilium-pjlx7\" (UID: \"6dbd7bd6-1126-4dfd-9073-bda0ee8106f2\") " pod="kube-system/cilium-pjlx7" May 17 00:52:44.278679 kubelet[2445]: I0517 00:52:44.278263 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6dbd7bd6-1126-4dfd-9073-bda0ee8106f2-hubble-tls\") pod \"cilium-pjlx7\" (UID: \"6dbd7bd6-1126-4dfd-9073-bda0ee8106f2\") " pod="kube-system/cilium-pjlx7" May 17 00:52:44.278679 kubelet[2445]: I0517 00:52:44.278278 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6dbd7bd6-1126-4dfd-9073-bda0ee8106f2-xtables-lock\") pod \"cilium-pjlx7\" (UID: \"6dbd7bd6-1126-4dfd-9073-bda0ee8106f2\") " 
pod="kube-system/cilium-pjlx7" May 17 00:52:44.278679 kubelet[2445]: I0517 00:52:44.278291 2445 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hc2t\" (UniqueName: \"kubernetes.io/projected/6dbd7bd6-1126-4dfd-9073-bda0ee8106f2-kube-api-access-4hc2t\") pod \"cilium-pjlx7\" (UID: \"6dbd7bd6-1126-4dfd-9073-bda0ee8106f2\") " pod="kube-system/cilium-pjlx7" May 17 00:52:44.449425 env[1452]: time="2025-05-17T00:52:44.449352508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pjlx7,Uid:6dbd7bd6-1126-4dfd-9073-bda0ee8106f2,Namespace:kube-system,Attempt:0,}" May 17 00:52:44.501378 kubelet[2445]: I0517 00:52:44.501324 2445 setters.go:618] "Node became not ready" node="ci-3510.3.7-n-5e40c0776b" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-17T00:52:44Z","lastTransitionTime":"2025-05-17T00:52:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 17 00:52:44.516324 env[1452]: time="2025-05-17T00:52:44.516250487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:52:44.516475 env[1452]: time="2025-05-17T00:52:44.516434922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:52:44.516568 env[1452]: time="2025-05-17T00:52:44.516463961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:52:44.516827 env[1452]: time="2025-05-17T00:52:44.516766674Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb74b3bab7ca00649ef2c6c8c32b95fe260cb4df02511498ac56d995adbb0168 pid=4311 runtime=io.containerd.runc.v2 May 17 00:52:44.528929 systemd[1]: Started cri-containerd-cb74b3bab7ca00649ef2c6c8c32b95fe260cb4df02511498ac56d995adbb0168.scope. May 17 00:52:44.552912 env[1452]: time="2025-05-17T00:52:44.552876100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pjlx7,Uid:6dbd7bd6-1126-4dfd-9073-bda0ee8106f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb74b3bab7ca00649ef2c6c8c32b95fe260cb4df02511498ac56d995adbb0168\"" May 17 00:52:44.561086 env[1452]: time="2025-05-17T00:52:44.561055107Z" level=info msg="CreateContainer within sandbox \"cb74b3bab7ca00649ef2c6c8c32b95fe260cb4df02511498ac56d995adbb0168\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:52:44.597398 env[1452]: time="2025-05-17T00:52:44.597357529Z" level=info msg="CreateContainer within sandbox \"cb74b3bab7ca00649ef2c6c8c32b95fe260cb4df02511498ac56d995adbb0168\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"67f6caa9c87b62b49fa315e472e8beb3666c81e606b5ee7a9efd38d1e4e85ced\"" May 17 00:52:44.598248 env[1452]: time="2025-05-17T00:52:44.598221748Z" level=info msg="StartContainer for \"67f6caa9c87b62b49fa315e472e8beb3666c81e606b5ee7a9efd38d1e4e85ced\"" May 17 00:52:44.616578 systemd[1]: Started cri-containerd-67f6caa9c87b62b49fa315e472e8beb3666c81e606b5ee7a9efd38d1e4e85ced.scope. 
May 17 00:52:44.647935 env[1452]: time="2025-05-17T00:52:44.647298108Z" level=info msg="StartContainer for \"67f6caa9c87b62b49fa315e472e8beb3666c81e606b5ee7a9efd38d1e4e85ced\" returns successfully" May 17 00:52:44.657902 systemd[1]: cri-containerd-67f6caa9c87b62b49fa315e472e8beb3666c81e606b5ee7a9efd38d1e4e85ced.scope: Deactivated successfully. May 17 00:52:44.688374 env[1452]: time="2025-05-17T00:52:44.688332177Z" level=info msg="shim disconnected" id=67f6caa9c87b62b49fa315e472e8beb3666c81e606b5ee7a9efd38d1e4e85ced May 17 00:52:44.688581 env[1452]: time="2025-05-17T00:52:44.688564052Z" level=warning msg="cleaning up after shim disconnected" id=67f6caa9c87b62b49fa315e472e8beb3666c81e606b5ee7a9efd38d1e4e85ced namespace=k8s.io May 17 00:52:44.688658 env[1452]: time="2025-05-17T00:52:44.688645650Z" level=info msg="cleaning up dead shim" May 17 00:52:44.695075 env[1452]: time="2025-05-17T00:52:44.695037819Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:52:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4394 runtime=io.containerd.runc.v2\n" May 17 00:52:44.718312 kubelet[2445]: I0517 00:52:44.717971 2445 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70a28cb8-cc44-4420-9a90-072a2856f7ad" path="/var/lib/kubelet/pods/70a28cb8-cc44-4420-9a90-072a2856f7ad/volumes" May 17 00:52:45.084594 env[1452]: time="2025-05-17T00:52:45.084552555Z" level=info msg="CreateContainer within sandbox \"cb74b3bab7ca00649ef2c6c8c32b95fe260cb4df02511498ac56d995adbb0168\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 17 00:52:45.116022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3948832869.mount: Deactivated successfully. 
May 17 00:52:45.126655 env[1452]: time="2025-05-17T00:52:45.126613294Z" level=info msg="CreateContainer within sandbox \"cb74b3bab7ca00649ef2c6c8c32b95fe260cb4df02511498ac56d995adbb0168\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e56696dddc066a2d5b4ce789b507abe5d90e1b8239d3ac521e9ad20fe5619fc5\"" May 17 00:52:45.127274 env[1452]: time="2025-05-17T00:52:45.127186840Z" level=info msg="StartContainer for \"e56696dddc066a2d5b4ce789b507abe5d90e1b8239d3ac521e9ad20fe5619fc5\"" May 17 00:52:45.144204 systemd[1]: Started cri-containerd-e56696dddc066a2d5b4ce789b507abe5d90e1b8239d3ac521e9ad20fe5619fc5.scope. May 17 00:52:45.170661 env[1452]: time="2025-05-17T00:52:45.170620667Z" level=info msg="StartContainer for \"e56696dddc066a2d5b4ce789b507abe5d90e1b8239d3ac521e9ad20fe5619fc5\" returns successfully" May 17 00:52:45.175961 systemd[1]: cri-containerd-e56696dddc066a2d5b4ce789b507abe5d90e1b8239d3ac521e9ad20fe5619fc5.scope: Deactivated successfully. May 17 00:52:45.202797 env[1452]: time="2025-05-17T00:52:45.202753518Z" level=info msg="shim disconnected" id=e56696dddc066a2d5b4ce789b507abe5d90e1b8239d3ac521e9ad20fe5619fc5 May 17 00:52:45.202797 env[1452]: time="2025-05-17T00:52:45.202796957Z" level=warning msg="cleaning up after shim disconnected" id=e56696dddc066a2d5b4ce789b507abe5d90e1b8239d3ac521e9ad20fe5619fc5 namespace=k8s.io May 17 00:52:45.202987 env[1452]: time="2025-05-17T00:52:45.202806317Z" level=info msg="cleaning up dead shim" May 17 00:52:45.209404 env[1452]: time="2025-05-17T00:52:45.209368444Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:52:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4457 runtime=io.containerd.runc.v2\n" May 17 00:52:45.390001 kubelet[2445]: W0517 00:52:45.389477 2445 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70a28cb8_cc44_4420_9a90_072a2856f7ad.slice/cri-containerd-c73246c5e25d61bcca6a609eda36b59cdd14fe451a167f273a69fa12c072b37e.scope WatchSource:0}: container "c73246c5e25d61bcca6a609eda36b59cdd14fe451a167f273a69fa12c072b37e" in namespace "k8s.io": not found May 17 00:52:45.794854 kubelet[2445]: E0517 00:52:45.794733 2445 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 17 00:52:45.965636 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e56696dddc066a2d5b4ce789b507abe5d90e1b8239d3ac521e9ad20fe5619fc5-rootfs.mount: Deactivated successfully. May 17 00:52:46.089701 env[1452]: time="2025-05-17T00:52:46.089651978Z" level=info msg="CreateContainer within sandbox \"cb74b3bab7ca00649ef2c6c8c32b95fe260cb4df02511498ac56d995adbb0168\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:52:46.118276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1619444141.mount: Deactivated successfully. May 17 00:52:46.126804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3457017741.mount: Deactivated successfully. May 17 00:52:46.143587 env[1452]: time="2025-05-17T00:52:46.143549938Z" level=info msg="CreateContainer within sandbox \"cb74b3bab7ca00649ef2c6c8c32b95fe260cb4df02511498ac56d995adbb0168\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"76e1bec0ea60a23a9f6af20c9cd98e51b623ec2eeddd7b91fa7db85fe990e1a8\"" May 17 00:52:46.145332 env[1452]: time="2025-05-17T00:52:46.144634033Z" level=info msg="StartContainer for \"76e1bec0ea60a23a9f6af20c9cd98e51b623ec2eeddd7b91fa7db85fe990e1a8\"" May 17 00:52:46.163450 systemd[1]: Started cri-containerd-76e1bec0ea60a23a9f6af20c9cd98e51b623ec2eeddd7b91fa7db85fe990e1a8.scope. 
May 17 00:52:46.191955 systemd[1]: cri-containerd-76e1bec0ea60a23a9f6af20c9cd98e51b623ec2eeddd7b91fa7db85fe990e1a8.scope: Deactivated successfully. May 17 00:52:46.193477 env[1452]: time="2025-05-17T00:52:46.193423751Z" level=info msg="StartContainer for \"76e1bec0ea60a23a9f6af20c9cd98e51b623ec2eeddd7b91fa7db85fe990e1a8\" returns successfully" May 17 00:52:46.220109 env[1452]: time="2025-05-17T00:52:46.220062738Z" level=info msg="shim disconnected" id=76e1bec0ea60a23a9f6af20c9cd98e51b623ec2eeddd7b91fa7db85fe990e1a8 May 17 00:52:46.220109 env[1452]: time="2025-05-17T00:52:46.220107257Z" level=warning msg="cleaning up after shim disconnected" id=76e1bec0ea60a23a9f6af20c9cd98e51b623ec2eeddd7b91fa7db85fe990e1a8 namespace=k8s.io May 17 00:52:46.220109 env[1452]: time="2025-05-17T00:52:46.220116976Z" level=info msg="cleaning up dead shim" May 17 00:52:46.226516 env[1452]: time="2025-05-17T00:52:46.226467630Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:52:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4515 runtime=io.containerd.runc.v2\n" May 17 00:52:47.091374 env[1452]: time="2025-05-17T00:52:47.091334838Z" level=info msg="CreateContainer within sandbox \"cb74b3bab7ca00649ef2c6c8c32b95fe260cb4df02511498ac56d995adbb0168\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 00:52:47.131878 env[1452]: time="2025-05-17T00:52:47.131790640Z" level=info msg="CreateContainer within sandbox \"cb74b3bab7ca00649ef2c6c8c32b95fe260cb4df02511498ac56d995adbb0168\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c4a1aa5b2f4ac3ab0e1b64e69ed323a374810666f146f2cd0a1107ffa8bfd0d0\"" May 17 00:52:47.133332 env[1452]: time="2025-05-17T00:52:47.132453745Z" level=info msg="StartContainer for \"c4a1aa5b2f4ac3ab0e1b64e69ed323a374810666f146f2cd0a1107ffa8bfd0d0\"" May 17 00:52:47.149053 systemd[1]: Started cri-containerd-c4a1aa5b2f4ac3ab0e1b64e69ed323a374810666f146f2cd0a1107ffa8bfd0d0.scope. 
May 17 00:52:47.178460 systemd[1]: cri-containerd-c4a1aa5b2f4ac3ab0e1b64e69ed323a374810666f146f2cd0a1107ffa8bfd0d0.scope: Deactivated successfully. May 17 00:52:47.182939 env[1452]: time="2025-05-17T00:52:47.182872240Z" level=info msg="StartContainer for \"c4a1aa5b2f4ac3ab0e1b64e69ed323a374810666f146f2cd0a1107ffa8bfd0d0\" returns successfully" May 17 00:52:47.208523 env[1452]: time="2025-05-17T00:52:47.208479699Z" level=info msg="shim disconnected" id=c4a1aa5b2f4ac3ab0e1b64e69ed323a374810666f146f2cd0a1107ffa8bfd0d0 May 17 00:52:47.208839 env[1452]: time="2025-05-17T00:52:47.208820131Z" level=warning msg="cleaning up after shim disconnected" id=c4a1aa5b2f4ac3ab0e1b64e69ed323a374810666f146f2cd0a1107ffa8bfd0d0 namespace=k8s.io May 17 00:52:47.208912 env[1452]: time="2025-05-17T00:52:47.208899530Z" level=info msg="cleaning up dead shim" May 17 00:52:47.215906 env[1452]: time="2025-05-17T00:52:47.215872131Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:52:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4574 runtime=io.containerd.runc.v2\n" May 17 00:52:47.965755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4a1aa5b2f4ac3ab0e1b64e69ed323a374810666f146f2cd0a1107ffa8bfd0d0-rootfs.mount: Deactivated successfully. 
May 17 00:52:48.095965 env[1452]: time="2025-05-17T00:52:48.095919825Z" level=info msg="CreateContainer within sandbox \"cb74b3bab7ca00649ef2c6c8c32b95fe260cb4df02511498ac56d995adbb0168\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:52:48.137852 env[1452]: time="2025-05-17T00:52:48.137798127Z" level=info msg="CreateContainer within sandbox \"cb74b3bab7ca00649ef2c6c8c32b95fe260cb4df02511498ac56d995adbb0168\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a2c5500c92743443b52fe200e9bb7a2f69bbd780d0ca88123bd70d0b24350dfc\"" May 17 00:52:48.138788 env[1452]: time="2025-05-17T00:52:48.138751066Z" level=info msg="StartContainer for \"a2c5500c92743443b52fe200e9bb7a2f69bbd780d0ca88123bd70d0b24350dfc\"" May 17 00:52:48.158703 systemd[1]: Started cri-containerd-a2c5500c92743443b52fe200e9bb7a2f69bbd780d0ca88123bd70d0b24350dfc.scope. May 17 00:52:48.184566 env[1452]: time="2025-05-17T00:52:48.184524481Z" level=info msg="StartContainer for \"a2c5500c92743443b52fe200e9bb7a2f69bbd780d0ca88123bd70d0b24350dfc\" returns successfully" May 17 00:52:48.498780 kubelet[2445]: W0517 00:52:48.498735 2445 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dbd7bd6_1126_4dfd_9073_bda0ee8106f2.slice/cri-containerd-67f6caa9c87b62b49fa315e472e8beb3666c81e606b5ee7a9efd38d1e4e85ced.scope WatchSource:0}: task 67f6caa9c87b62b49fa315e472e8beb3666c81e606b5ee7a9efd38d1e4e85ced not found May 17 00:52:48.516199 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) May 17 00:52:49.113602 kubelet[2445]: I0517 00:52:49.113540 2445 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pjlx7" podStartSLOduration=5.113506232 podStartE2EDuration="5.113506232s" podCreationTimestamp="2025-05-17 00:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-05-17 00:52:49.112615452 +0000 UTC m=+188.512368812" watchObservedRunningTime="2025-05-17 00:52:49.113506232 +0000 UTC m=+188.513259592" May 17 00:52:49.872371 systemd[1]: run-containerd-runc-k8s.io-a2c5500c92743443b52fe200e9bb7a2f69bbd780d0ca88123bd70d0b24350dfc-runc.LITjfp.mount: Deactivated successfully. May 17 00:52:51.227970 systemd-networkd[1623]: lxc_health: Link UP May 17 00:52:51.247372 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 17 00:52:51.247724 systemd-networkd[1623]: lxc_health: Gained carrier May 17 00:52:51.605767 kubelet[2445]: W0517 00:52:51.605726 2445 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dbd7bd6_1126_4dfd_9073_bda0ee8106f2.slice/cri-containerd-e56696dddc066a2d5b4ce789b507abe5d90e1b8239d3ac521e9ad20fe5619fc5.scope WatchSource:0}: task e56696dddc066a2d5b4ce789b507abe5d90e1b8239d3ac521e9ad20fe5619fc5 not found May 17 00:52:52.387320 systemd-networkd[1623]: lxc_health: Gained IPv6LL May 17 00:52:54.187578 systemd[1]: run-containerd-runc-k8s.io-a2c5500c92743443b52fe200e9bb7a2f69bbd780d0ca88123bd70d0b24350dfc-runc.tXEY5Z.mount: Deactivated successfully. May 17 00:52:54.714436 kubelet[2445]: W0517 00:52:54.714389 2445 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dbd7bd6_1126_4dfd_9073_bda0ee8106f2.slice/cri-containerd-76e1bec0ea60a23a9f6af20c9cd98e51b623ec2eeddd7b91fa7db85fe990e1a8.scope WatchSource:0}: task 76e1bec0ea60a23a9f6af20c9cd98e51b623ec2eeddd7b91fa7db85fe990e1a8 not found May 17 00:52:56.315374 systemd[1]: run-containerd-runc-k8s.io-a2c5500c92743443b52fe200e9bb7a2f69bbd780d0ca88123bd70d0b24350dfc-runc.9bqs1l.mount: Deactivated successfully. 
May 17 00:52:57.821423 kubelet[2445]: W0517 00:52:57.821386 2445 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dbd7bd6_1126_4dfd_9073_bda0ee8106f2.slice/cri-containerd-c4a1aa5b2f4ac3ab0e1b64e69ed323a374810666f146f2cd0a1107ffa8bfd0d0.scope WatchSource:0}: task c4a1aa5b2f4ac3ab0e1b64e69ed323a374810666f146f2cd0a1107ffa8bfd0d0 not found May 17 00:52:58.435716 systemd[1]: run-containerd-runc-k8s.io-a2c5500c92743443b52fe200e9bb7a2f69bbd780d0ca88123bd70d0b24350dfc-runc.HDJOWE.mount: Deactivated successfully. May 17 00:52:58.563329 sshd[4256]: pam_unix(sshd:session): session closed for user core May 17 00:52:58.566078 systemd[1]: sshd@22-10.200.20.19:22-10.200.16.10:33374.service: Deactivated successfully. May 17 00:52:58.566825 systemd[1]: session-25.scope: Deactivated successfully. May 17 00:52:58.567394 systemd-logind[1440]: Session 25 logged out. Waiting for processes to exit. May 17 00:52:58.568219 systemd-logind[1440]: Removed session 25.