Dec 13 13:14:47.348347 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 13 13:14:47.348368 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Fri Dec 13 11:56:07 -00 2024 Dec 13 13:14:47.348376 kernel: KASLR enabled Dec 13 13:14:47.348382 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Dec 13 13:14:47.348389 kernel: printk: bootconsole [pl11] enabled Dec 13 13:14:47.348394 kernel: efi: EFI v2.7 by EDK II Dec 13 13:14:47.348401 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598 Dec 13 13:14:47.348407 kernel: random: crng init done Dec 13 13:14:47.348413 kernel: secureboot: Secure boot disabled Dec 13 13:14:47.348418 kernel: ACPI: Early table checksum verification disabled Dec 13 13:14:47.348424 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Dec 13 13:14:47.348430 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:14:47.348435 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:14:47.348443 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Dec 13 13:14:47.348450 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:14:47.348456 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:14:47.348462 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:14:47.348469 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:14:47.348475 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:14:47.348481 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:14:47.348487 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Dec 13 13:14:47.348493 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 13:14:47.348499 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Dec 13 13:14:47.348505 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Dec 13 13:14:47.348511 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Dec 13 13:14:47.348517 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Dec 13 13:14:47.348523 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Dec 13 13:14:47.348529 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Dec 13 13:14:47.348537 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Dec 13 13:14:47.348543 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Dec 13 13:14:47.348549 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Dec 13 13:14:47.348555 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Dec 13 13:14:47.348561 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Dec 13 13:14:47.348567 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Dec 13 13:14:47.348572 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Dec 13 13:14:47.348578 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Dec 13 13:14:47.348584 kernel: Zone ranges: Dec 13 
13:14:47.348590 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Dec 13 13:14:47.348596 kernel: DMA32 empty Dec 13 13:14:47.348603 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Dec 13 13:14:47.348613 kernel: Movable zone start for each node Dec 13 13:14:47.348619 kernel: Early memory node ranges Dec 13 13:14:47.348626 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Dec 13 13:14:47.348632 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff] Dec 13 13:14:47.348638 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] Dec 13 13:14:47.348646 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] Dec 13 13:14:47.348653 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Dec 13 13:14:47.348659 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Dec 13 13:14:47.348665 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Dec 13 13:14:47.348671 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Dec 13 13:14:47.348678 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Dec 13 13:14:47.348684 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Dec 13 13:14:47.348691 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Dec 13 13:14:47.348697 kernel: psci: probing for conduit method from ACPI. Dec 13 13:14:47.348704 kernel: psci: PSCIv1.1 detected in firmware. Dec 13 13:14:47.348710 kernel: psci: Using standard PSCI v0.2 function IDs Dec 13 13:14:47.348716 kernel: psci: MIGRATE_INFO_TYPE not supported. Dec 13 13:14:47.349814 kernel: psci: SMC Calling Convention v1.4 Dec 13 13:14:47.349837 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Dec 13 13:14:47.349845 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Dec 13 13:14:47.349851 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Dec 13 13:14:47.349858 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Dec 13 13:14:47.349865 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 13 13:14:47.349871 kernel: Detected PIPT I-cache on CPU0 Dec 13 13:14:47.349878 kernel: CPU features: detected: GIC system register CPU interface Dec 13 13:14:47.349884 kernel: CPU features: detected: Hardware dirty bit management Dec 13 13:14:47.349891 kernel: CPU features: detected: Spectre-BHB Dec 13 13:14:47.349897 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 13 13:14:47.349910 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 13 13:14:47.349916 kernel: CPU features: detected: ARM erratum 1418040 Dec 13 13:14:47.349923 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Dec 13 13:14:47.349929 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 13 13:14:47.349936 kernel: alternatives: applying boot alternatives Dec 13 13:14:47.349944 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472 Dec 13 13:14:47.349951 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Dec 13 13:14:47.349958 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 13:14:47.349964 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 13:14:47.349971 kernel: Fallback order for Node 0: 0 Dec 13 13:14:47.349977 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Dec 13 13:14:47.349985 kernel: Policy zone: Normal Dec 13 13:14:47.349997 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 13:14:47.350003 kernel: software IO TLB: area num 2. Dec 13 13:14:47.350009 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB) Dec 13 13:14:47.350016 kernel: Memory: 3982056K/4194160K available (10304K kernel code, 2184K rwdata, 8088K rodata, 39936K init, 897K bss, 212104K reserved, 0K cma-reserved) Dec 13 13:14:47.350023 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 13:14:47.350030 kernel: trace event string verifier disabled Dec 13 13:14:47.350036 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 13:14:47.350043 kernel: rcu: RCU event tracing is enabled. Dec 13 13:14:47.350050 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 13:14:47.350057 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 13:14:47.350065 kernel: Tracing variant of Tasks RCU enabled. Dec 13 13:14:47.350071 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 13:14:47.350078 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 13:14:47.350084 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 13:14:47.350091 kernel: GICv3: 960 SPIs implemented Dec 13 13:14:47.350097 kernel: GICv3: 0 Extended SPIs implemented Dec 13 13:14:47.350103 kernel: Root IRQ handler: gic_handle_irq Dec 13 13:14:47.350110 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 13 13:14:47.350117 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Dec 13 13:14:47.350123 kernel: ITS: No ITS available, not enabling LPIs Dec 13 13:14:47.350130 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 13:14:47.350136 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 13:14:47.350145 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 13 13:14:47.350151 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 13 13:14:47.350158 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 13 13:14:47.350164 kernel: Console: colour dummy device 80x25 Dec 13 13:14:47.350171 kernel: printk: console [tty1] enabled Dec 13 13:14:47.350178 kernel: ACPI: Core revision 20230628 Dec 13 13:14:47.350184 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 13 13:14:47.350191 kernel: pid_max: default: 32768 minimum: 301 Dec 13 13:14:47.350198 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 13:14:47.350204 kernel: landlock: Up and running. Dec 13 13:14:47.350213 kernel: SELinux: Initializing. Dec 13 13:14:47.350219 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 13:14:47.350226 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 13:14:47.350233 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Dec 13 13:14:47.350239 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 13:14:47.350246 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Dec 13 13:14:47.350260 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0 Dec 13 13:14:47.350267 kernel: Hyper-V: enabling crash_kexec_post_notifiers Dec 13 13:14:47.350274 kernel: rcu: Hierarchical SRCU implementation. Dec 13 13:14:47.350281 kernel: rcu: Max phase no-delay instances is 400. Dec 13 13:14:47.350287 kernel: Remapping and enabling EFI services. Dec 13 13:14:47.350296 kernel: smp: Bringing up secondary CPUs ... Dec 13 13:14:47.350303 kernel: Detected PIPT I-cache on CPU1 Dec 13 13:14:47.350310 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Dec 13 13:14:47.350317 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 13:14:47.350324 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 13 13:14:47.350332 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 13:14:47.350339 kernel: SMP: Total of 2 processors activated. Dec 13 13:14:47.350346 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 13:14:47.350353 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Dec 13 13:14:47.350360 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 13 13:14:47.350367 kernel: CPU features: detected: CRC32 instructions Dec 13 13:14:47.350374 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 13 13:14:47.350381 kernel: CPU features: detected: LSE atomic instructions Dec 13 13:14:47.350388 kernel: CPU features: detected: Privileged Access Never Dec 13 13:14:47.350396 kernel: CPU: All CPU(s) started at EL1 Dec 13 13:14:47.350403 kernel: alternatives: applying system-wide alternatives Dec 13 13:14:47.350410 kernel: devtmpfs: initialized Dec 13 13:14:47.350417 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 13:14:47.350424 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 13:14:47.350431 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 13:14:47.350438 kernel: SMBIOS 3.1.0 present. Dec 13 13:14:47.350445 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Dec 13 13:14:47.350452 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 13:14:47.350461 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 13 13:14:47.350468 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 13:14:47.350475 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 13:14:47.350482 kernel: audit: initializing netlink subsys (disabled) Dec 13 13:14:47.350489 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Dec 13 13:14:47.350496 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 13:14:47.350503 kernel: cpuidle: using governor menu Dec 13 13:14:47.350510 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Dec 13 13:14:47.350517 kernel: ASID allocator initialised with 32768 entries Dec 13 13:14:47.350525 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 13:14:47.350532 kernel: Serial: AMBA PL011 UART driver Dec 13 13:14:47.350539 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 13 13:14:47.350546 kernel: Modules: 0 pages in range for non-PLT usage Dec 13 13:14:47.350553 kernel: Modules: 508880 pages in range for PLT usage Dec 13 13:14:47.350560 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 13:14:47.350567 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 13:14:47.350574 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 13:14:47.350581 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 13 13:14:47.350604 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 13:14:47.350611 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 13:14:47.350618 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 13:14:47.350625 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 13 13:14:47.350632 kernel: ACPI: Added _OSI(Module Device) Dec 13 13:14:47.350639 kernel: ACPI: Added _OSI(Processor Device) Dec 13 13:14:47.350646 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 13:14:47.350653 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 13:14:47.350659 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 13:14:47.350668 kernel: ACPI: Interpreter enabled Dec 13 13:14:47.350675 kernel: ACPI: Using GIC for interrupt routing Dec 13 13:14:47.350682 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Dec 13 13:14:47.350689 kernel: printk: console [ttyAMA0] enabled Dec 13 13:14:47.350696 kernel: printk: bootconsole [pl11] disabled Dec 13 13:14:47.350702 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Dec 13 13:14:47.350709 kernel: iommu: Default domain type: Translated Dec 13 13:14:47.350716 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 13:14:47.350723 kernel: efivars: Registered efivars operations Dec 13 13:14:47.350742 kernel: vgaarb: loaded Dec 13 13:14:47.350750 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 13:14:47.350757 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 13:14:47.350764 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 13:14:47.350771 kernel: pnp: PnP ACPI init Dec 13 13:14:47.350778 kernel: pnp: PnP ACPI: found 0 devices Dec 13 13:14:47.350785 kernel: NET: Registered PF_INET protocol family Dec 13 13:14:47.350792 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 13:14:47.350799 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 13:14:47.350808 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 13:14:47.350815 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 13:14:47.350822 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 13:14:47.350829 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 13:14:47.350836 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 13:14:47.350843 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 13:14:47.350850 
kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 13:14:47.350857 kernel: PCI: CLS 0 bytes, default 64 Dec 13 13:14:47.350864 kernel: kvm [1]: HYP mode not available Dec 13 13:14:47.350873 kernel: Initialise system trusted keyrings Dec 13 13:14:47.350879 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 13:14:47.350886 kernel: Key type asymmetric registered Dec 13 13:14:47.350893 kernel: Asymmetric key parser 'x509' registered Dec 13 13:14:47.350900 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 13:14:47.350907 kernel: io scheduler mq-deadline registered Dec 13 13:14:47.350914 kernel: io scheduler kyber registered Dec 13 13:14:47.350921 kernel: io scheduler bfq registered Dec 13 13:14:47.350928 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 13:14:47.350936 kernel: thunder_xcv, ver 1.0 Dec 13 13:14:47.350943 kernel: thunder_bgx, ver 1.0 Dec 13 13:14:47.350950 kernel: nicpf, ver 1.0 Dec 13 13:14:47.350957 kernel: nicvf, ver 1.0 Dec 13 13:14:47.351091 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 13:14:47.351160 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T13:14:46 UTC (1734095686) Dec 13 13:14:47.351169 kernel: efifb: probing for efifb Dec 13 13:14:47.351177 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 13 13:14:47.351186 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 13 13:14:47.351193 kernel: efifb: scrolling: redraw Dec 13 13:14:47.351199 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 13:14:47.351207 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 13:14:47.351213 kernel: fb0: EFI VGA frame buffer device Dec 13 13:14:47.351220 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Dec 13 13:14:47.351227 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 13:14:47.351234 kernel: No ACPI PMU IRQ for CPU0 Dec 13 13:14:47.351241 kernel: No ACPI PMU IRQ for CPU1 Dec 13 13:14:47.351249 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Dec 13 13:14:47.351256 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 13:14:47.351263 kernel: watchdog: Hard watchdog permanently disabled Dec 13 13:14:47.351270 kernel: NET: Registered PF_INET6 protocol family Dec 13 13:14:47.351277 kernel: Segment Routing with IPv6 Dec 13 13:14:47.351284 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 13:14:47.351291 kernel: NET: Registered PF_PACKET protocol family Dec 13 13:14:47.351297 kernel: Key type dns_resolver registered Dec 13 13:14:47.351304 kernel: registered taskstats version 1 Dec 13 13:14:47.351313 kernel: Loading compiled-in X.509 certificates Dec 13 13:14:47.351320 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 752b3e36c6039904ea643ccad2b3f5f3cb4ebf78' Dec 13 13:14:47.351327 kernel: Key type .fscrypt registered Dec 13 13:14:47.351333 kernel: Key type fscrypt-provisioning registered Dec 13 13:14:47.351340 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 13:14:47.351348 kernel: ima: Allocated hash algorithm: sha1 Dec 13 13:14:47.351355 kernel: ima: No architecture policies found Dec 13 13:14:47.351362 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 13:14:47.351368 kernel: clk: Disabling unused clocks Dec 13 13:14:47.351376 kernel: Freeing unused kernel memory: 39936K Dec 13 13:14:47.351383 kernel: Run /init as init process Dec 13 13:14:47.351390 kernel: with arguments: Dec 13 13:14:47.351397 kernel: /init Dec 13 13:14:47.351403 kernel: with environment: Dec 13 13:14:47.351410 kernel: HOME=/ Dec 13 13:14:47.351417 kernel: TERM=linux Dec 13 13:14:47.351424 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 13:14:47.351433 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 13:14:47.351443 systemd[1]: Detected virtualization microsoft. Dec 13 13:14:47.351451 systemd[1]: Detected architecture arm64. Dec 13 13:14:47.351458 systemd[1]: Running in initrd. Dec 13 13:14:47.351466 systemd[1]: No hostname configured, using default hostname. Dec 13 13:14:47.351473 systemd[1]: Hostname set to . Dec 13 13:14:47.351480 systemd[1]: Initializing machine ID from random generator. Dec 13 13:14:47.351488 systemd[1]: Queued start job for default target initrd.target. Dec 13 13:14:47.351497 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:14:47.351505 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:14:47.351513 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 13:14:47.351520 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:14:47.351528 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 13:14:47.351536 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 13:14:47.351544 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 13:14:47.351554 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 13:14:47.351561 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:14:47.351569 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:14:47.351576 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:14:47.351583 systemd[1]: Reached target slices.target - Slice Units. Dec 13 13:14:47.351591 systemd[1]: Reached target swap.target - Swaps. Dec 13 13:14:47.351598 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:14:47.351605 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:14:47.351614 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:14:47.351622 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 13:14:47.351629 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Dec 13 13:14:47.351637 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:14:47.351644 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 13:14:47.351652 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:14:47.351659 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:14:47.351667 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 13:14:47.351674 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:14:47.351683 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 13:14:47.351690 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 13:14:47.351698 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:14:47.351705 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:14:47.357390 systemd-journald[218]: Collecting audit messages is disabled. Dec 13 13:14:47.357432 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:14:47.357442 systemd-journald[218]: Journal started Dec 13 13:14:47.357464 systemd-journald[218]: Runtime Journal (/run/log/journal/5b3afe2b946a4c1988276eca0cc9c033) is 8.0M, max 78.5M, 70.5M free. Dec 13 13:14:47.357842 systemd-modules-load[219]: Inserted module 'overlay' Dec 13 13:14:47.373062 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:14:47.391748 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 13:14:47.392090 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 13:14:47.409450 kernel: Bridge firewalling registered Dec 13 13:14:47.403178 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:14:47.408478 systemd-modules-load[219]: Inserted module 'br_netfilter' Dec 13 13:14:47.417236 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 13:14:47.427646 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 13:14:47.438263 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:14:47.460974 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:14:47.477790 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:14:47.498864 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 13:14:47.516886 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:14:47.526762 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:14:47.542772 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:14:47.567764 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:14:47.576747 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:14:47.603179 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 13:14:47.618339 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:14:47.637449 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Dec 13 13:14:47.657574 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:14:47.678484 dracut-cmdline[250]: dracut-dracut-053 Dec 13 13:14:47.678484 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472 Dec 13 13:14:47.720951 systemd-resolved[253]: Positive Trust Anchors: Dec 13 13:14:47.720967 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:14:47.720997 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:14:47.723620 systemd-resolved[253]: Defaulting to hostname 'linux'. Dec 13 13:14:47.725812 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:14:47.733236 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:14:47.838777 kernel: SCSI subsystem initialized Dec 13 13:14:47.847751 kernel: Loading iSCSI transport class v2.0-870. Dec 13 13:14:47.858810 kernel: iscsi: registered transport (tcp) Dec 13 13:14:47.877051 kernel: iscsi: registered transport (qla4xxx) Dec 13 13:14:47.877105 kernel: QLogic iSCSI HBA Driver Dec 13 13:14:47.915400 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 13:14:47.933930 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 13:14:47.974837 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 13:14:47.974895 kernel: device-mapper: uevent: version 1.0.3 Dec 13 13:14:47.982045 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 13:14:48.031750 kernel: raid6: neonx8 gen() 15771 MB/s Dec 13 13:14:48.051739 kernel: raid6: neonx4 gen() 15799 MB/s Dec 13 13:14:48.071736 kernel: raid6: neonx2 gen() 13344 MB/s Dec 13 13:14:48.092742 kernel: raid6: neonx1 gen() 10543 MB/s Dec 13 13:14:48.112735 kernel: raid6: int64x8 gen() 6792 MB/s Dec 13 13:14:48.132736 kernel: raid6: int64x4 gen() 7359 MB/s Dec 13 13:14:48.153737 kernel: raid6: int64x2 gen() 6112 MB/s Dec 13 13:14:48.177264 kernel: raid6: int64x1 gen() 5059 MB/s Dec 13 13:14:48.177277 kernel: raid6: using algorithm neonx4 gen() 15799 MB/s Dec 13 13:14:48.200793 kernel: raid6: .... 
xor() 12447 MB/s, rmw enabled Dec 13 13:14:48.200805 kernel: raid6: using neon recovery algorithm Dec 13 13:14:48.212530 kernel: xor: measuring software checksum speed Dec 13 13:14:48.212544 kernel: 8regs : 21630 MB/sec Dec 13 13:14:48.215882 kernel: 32regs : 21699 MB/sec Dec 13 13:14:48.219258 kernel: arm64_neon : 28013 MB/sec Dec 13 13:14:48.223600 kernel: xor: using function: arm64_neon (28013 MB/sec) Dec 13 13:14:48.273756 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 13:14:48.284230 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:14:48.299882 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:14:48.322430 systemd-udevd[436]: Using default interface naming scheme 'v255'. Dec 13 13:14:48.327831 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:14:48.345849 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 13:14:48.379120 dracut-pre-trigger[451]: rd.md=0: removing MD RAID activation Dec 13 13:14:48.412197 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:14:48.426950 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:14:48.464254 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:14:48.482901 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 13:14:48.504014 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 13:14:48.520490 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:14:48.538782 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:14:48.552015 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:14:48.580760 kernel: hv_vmbus: Vmbus version:5.3 Dec 13 13:14:48.593020 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 13:14:48.630098 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 13 13:14:48.630123 kernel: hv_vmbus: registering driver hid_hyperv Dec 13 13:14:48.630135 kernel: hv_vmbus: registering driver hv_netvsc Dec 13 13:14:48.630146 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Dec 13 13:14:48.630157 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 13 13:14:48.624040 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:14:48.658468 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Dec 13 13:14:48.651342 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:14:48.678625 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 13:14:48.678648 kernel: hv_vmbus: registering driver hv_storvsc Dec 13 13:14:48.678657 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 13:14:48.651490 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 13:14:48.719646 kernel: scsi host1: storvsc_host_t Dec 13 13:14:48.719826 kernel: scsi host0: storvsc_host_t Dec 13 13:14:48.719913 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 13 13:14:48.678919 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:14:48.691128 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:14:48.752508 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Dec 13 13:14:48.752551 kernel: hv_netvsc 000d3af7-acec-000d-3af7-acec000d3af7 eth0: VF slot 1 added Dec 13 13:14:48.691352 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:14:48.705828 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:14:48.739904 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:14:48.799540 kernel: PTP clock support registered Dec 13 13:14:48.799561 kernel: hv_vmbus: registering driver hv_pci Dec 13 13:14:48.762025 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:14:48.817008 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 13 13:14:48.885893 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 13:14:48.885919 kernel: hv_pci b43e206d-cd91-428d-acea-9a73b5bc39c3: PCI VMBus probing: Using version 0x10004 Dec 13 13:14:49.310047 kernel: hv_utils: Registering HyperV Utility Driver Dec 13 13:14:49.310067 kernel: hv_pci b43e206d-cd91-428d-acea-9a73b5bc39c3: PCI host bridge to bus cd91:00 Dec 13 13:14:49.310205 kernel: hv_vmbus: registering driver hv_utils Dec 13 13:14:49.310217 kernel: pci_bus cd91:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Dec 13 13:14:49.310335 kernel: hv_utils: Shutdown IC version 3.2 Dec 13 13:14:49.310346 kernel: pci_bus cd91:00: No busn resource found for root bus, will use [bus 00-ff] Dec 13 13:14:49.310453 kernel: hv_utils: Heartbeat IC version 3.0 Dec 13 13:14:49.310476 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 13 13:14:49.310812 kernel: pci cd91:00:02.0: [15b3:1018] type 00 class 0x020000 Dec 13 13:14:49.310990 kernel: hv_utils: TimeSync IC version 4.0 Dec 13 13:14:49.311002 kernel: pci cd91:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 13 13:14:49.311091 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 13 13:14:49.311207 kernel: pci cd91:00:02.0: enabling Extended Tags Dec 13 13:14:49.311303 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 13:14:49.311390 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 13:14:49.311474 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 13 13:14:49.311553 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 13 13:14:49.311633 kernel: pci cd91:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at cd91:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Dec 13 13:14:49.311721 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:14:49.311730 kernel: pci_bus cd91:00: busn_res: [bus 00-ff] end is updated to 00 Dec 13 13:14:49.311808 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 13:14:49.311885 kernel: pci cd91:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 13 13:14:48.762114 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:14:48.807072 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 13 13:14:48.839196 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:14:48.877143 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:14:49.218697 systemd-resolved[253]: Clock change detected. Flushing caches. Dec 13 13:14:49.302351 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:14:49.371262 kernel: mlx5_core cd91:00:02.0: enabling device (0000 -> 0002) Dec 13 13:14:49.587580 kernel: mlx5_core cd91:00:02.0: firmware version: 16.30.1284 Dec 13 13:14:49.587701 kernel: hv_netvsc 000d3af7-acec-000d-3af7-acec000d3af7 eth0: VF registering: eth1 Dec 13 13:14:49.587787 kernel: mlx5_core cd91:00:02.0 eth1: joined to eth0 Dec 13 13:14:49.587877 kernel: mlx5_core cd91:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Dec 13 13:14:49.595171 kernel: mlx5_core cd91:00:02.0 enP52625s1: renamed from eth1 Dec 13 13:14:49.821949 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Dec 13 13:14:49.935203 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (500) Dec 13 13:14:49.949176 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 13 13:14:49.973576 kernel: BTRFS: device fsid 47b12626-f7d3-4179-9720-ca262eb4c614 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (495) Dec 13 13:14:49.985089 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Dec 13 13:14:50.003423 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Dec 13 13:14:50.010608 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Dec 13 13:14:50.042311 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 13:14:50.066206 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:14:50.074157 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:14:51.084743 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:14:51.084800 disk-uuid[603]: The operation has completed successfully. Dec 13 13:14:51.145451 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 13:14:51.145554 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 13:14:51.175276 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 13:14:51.189109 sh[689]: Success Dec 13 13:14:51.229333 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 13:14:51.471581 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 13:14:51.484576 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 13:14:51.495280 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 13:14:51.526698 kernel: BTRFS info (device dm-0): first mount of filesystem 47b12626-f7d3-4179-9720-ca262eb4c614 Dec 13 13:14:51.526748 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 13:14:51.534290 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 13:14:51.539956 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 13:14:51.544574 kernel: BTRFS info (device dm-0): using free space tree Dec 13 13:14:51.928780 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Dec 13 13:14:51.934868 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 13:14:51.957375 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 13:14:51.970312 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 13:14:52.002400 kernel: BTRFS info (device sda6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:14:52.002422 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 13:14:52.002438 kernel: BTRFS info (device sda6): using free space tree Dec 13 13:14:52.012262 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 13:14:52.029240 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 13:14:52.034259 kernel: BTRFS info (device sda6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:14:52.041215 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 13:14:52.057462 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 13:14:52.106442 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:14:52.126269 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:14:52.152833 systemd-networkd[873]: lo: Link UP Dec 13 13:14:52.152846 systemd-networkd[873]: lo: Gained carrier Dec 13 13:14:52.154423 systemd-networkd[873]: Enumeration completed Dec 13 13:14:52.156722 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:14:52.157564 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:14:52.157567 systemd-networkd[873]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:14:52.166562 systemd[1]: Reached target network.target - Network. Dec 13 13:14:52.244153 kernel: mlx5_core cd91:00:02.0 enP52625s1: Link up Dec 13 13:14:52.284157 kernel: hv_netvsc 000d3af7-acec-000d-3af7-acec000d3af7 eth0: Data path switched to VF: enP52625s1 Dec 13 13:14:52.284414 systemd-networkd[873]: enP52625s1: Link UP Dec 13 13:14:52.284646 systemd-networkd[873]: eth0: Link UP Dec 13 13:14:52.284993 systemd-networkd[873]: eth0: Gained carrier Dec 13 13:14:52.285003 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:14:52.294784 systemd-networkd[873]: enP52625s1: Gained carrier Dec 13 13:14:52.319187 systemd-networkd[873]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 13:14:52.920085 ignition[805]: Ignition 2.20.0 Dec 13 13:14:52.920097 ignition[805]: Stage: fetch-offline Dec 13 13:14:52.924585 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:14:52.920151 ignition[805]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:14:52.920159 ignition[805]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:14:52.920259 ignition[805]: parsed url from cmdline: "" Dec 13 13:14:52.920263 ignition[805]: no config URL provided Dec 13 13:14:52.920267 ignition[805]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:14:52.952357 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 13 13:14:52.920274 ignition[805]: no config at "/usr/lib/ignition/user.ign" Dec 13 13:14:52.920278 ignition[805]: failed to fetch config: resource requires networking Dec 13 13:14:52.920448 ignition[805]: Ignition finished successfully Dec 13 13:14:52.977951 ignition[884]: Ignition 2.20.0 Dec 13 13:14:52.977959 ignition[884]: Stage: fetch Dec 13 13:14:52.978158 ignition[884]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:14:52.978168 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:14:52.978266 ignition[884]: parsed url from cmdline: "" Dec 13 13:14:52.978269 ignition[884]: no config URL provided Dec 13 13:14:52.978274 ignition[884]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:14:52.978281 ignition[884]: no config at "/usr/lib/ignition/user.ign" Dec 13 13:14:52.978306 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 13 13:14:53.066273 ignition[884]: GET result: OK Dec 13 13:14:53.066372 ignition[884]: config has been read from IMDS userdata Dec 13 13:14:53.066412 ignition[884]: parsing config with SHA512: 670c85fc2ed921b3d0f08f163660321975b731cef3f287b83f3d8db020ee05a6fb2b6caa7a3d697df467858db27948c4448f147a2a62deb6953592c1ab276259 Dec 13 13:14:53.071548 unknown[884]: fetched base config from "system" Dec 13 13:14:53.071975 ignition[884]: fetch: fetch complete Dec 13 13:14:53.071558 unknown[884]: fetched base config from "system" Dec 13 13:14:53.071980 ignition[884]: fetch: fetch passed Dec 13 13:14:53.071563 unknown[884]: fetched user config from "azure" Dec 13 13:14:53.072023 ignition[884]: Ignition finished successfully Dec 13 13:14:53.076473 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 13:14:53.101258 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 13:14:53.114956 ignition[890]: Ignition 2.20.0 Dec 13 13:14:53.118058 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 13:14:53.114962 ignition[890]: Stage: kargs Dec 13 13:14:53.115227 ignition[890]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:14:53.139321 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 13:14:53.115238 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:14:53.116155 ignition[890]: kargs: kargs passed Dec 13 13:14:53.116199 ignition[890]: Ignition finished successfully Dec 13 13:14:53.170461 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 13:14:53.161446 ignition[896]: Ignition 2.20.0 Dec 13 13:14:53.179553 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 13:14:53.161452 ignition[896]: Stage: disks Dec 13 13:14:53.190514 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 13:14:53.161708 ignition[896]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:14:53.200037 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:14:53.161718 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:14:53.210986 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:14:53.167651 ignition[896]: disks: disks passed Dec 13 13:14:53.219442 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:14:53.167702 ignition[896]: Ignition finished successfully Dec 13 13:14:53.246353 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Dec 13 13:14:53.311344 systemd-fsck[904]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Dec 13 13:14:53.318418 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 13:14:53.340329 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 13:14:53.397147 kernel: EXT4-fs (sda9): mounted filesystem 0aa4851d-a2ba-4d04-90b3-5d00bf608ecc r/w with ordered data mode. Quota mode: none. Dec 13 13:14:53.397787 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 13:14:53.402871 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 13:14:53.448207 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:14:53.458830 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 13:14:53.468305 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 13:14:53.474488 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 13:14:53.474524 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:14:53.501357 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 13:14:53.523385 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (915) Dec 13 13:14:53.522311 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 13:14:53.548400 kernel: BTRFS info (device sda6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:14:53.548433 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 13:14:53.548444 kernel: BTRFS info (device sda6): using free space tree Dec 13 13:14:53.547645 systemd-networkd[873]: eth0: Gained IPv6LL Dec 13 13:14:53.560338 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 13:14:53.561780 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 13:14:53.735223 systemd-networkd[873]: enP52625s1: Gained IPv6LL Dec 13 13:14:54.027515 coreos-metadata[917]: Dec 13 13:14:54.027 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 13:14:54.035728 coreos-metadata[917]: Dec 13 13:14:54.035 INFO Fetch successful Dec 13 13:14:54.035728 coreos-metadata[917]: Dec 13 13:14:54.035 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 13 13:14:54.053847 coreos-metadata[917]: Dec 13 13:14:54.053 INFO Fetch successful Dec 13 13:14:54.112177 coreos-metadata[917]: Dec 13 13:14:54.112 INFO wrote hostname ci-4186.0.0-a-128d80e197 to /sysroot/etc/hostname Dec 13 13:14:54.122368 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 13:14:54.318073 initrd-setup-root[945]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 13:14:54.357224 initrd-setup-root[952]: cut: /sysroot/etc/group: No such file or directory Dec 13 13:14:54.376957 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 13:14:54.385854 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 13:14:55.148976 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 13:14:55.166375 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 13:14:55.180515 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Dec 13 13:14:55.202461 kernel: BTRFS info (device sda6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:14:55.199741 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 13:14:55.222985 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 13:14:55.238402 ignition[1036]: INFO : Ignition 2.20.0 Dec 13 13:14:55.244751 ignition[1036]: INFO : Stage: mount Dec 13 13:14:55.244751 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:14:55.244751 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:14:55.244751 ignition[1036]: INFO : mount: mount passed Dec 13 13:14:55.244751 ignition[1036]: INFO : Ignition finished successfully Dec 13 13:14:55.245669 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 13:14:55.266240 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 13:14:55.281793 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:14:55.336716 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1045) Dec 13 13:14:55.336761 kernel: BTRFS info (device sda6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:14:55.343647 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 13:14:55.348690 kernel: BTRFS info (device sda6): using free space tree Dec 13 13:14:55.356147 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 13:14:55.358008 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 13:14:55.386829 ignition[1063]: INFO : Ignition 2.20.0 Dec 13 13:14:55.386829 ignition[1063]: INFO : Stage: files Dec 13 13:14:55.396158 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:14:55.396158 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:14:55.396158 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping Dec 13 13:14:55.396158 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 13:14:55.396158 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 13:14:55.457122 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 13:14:55.465888 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 13:14:55.465888 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 13:14:55.457560 unknown[1063]: wrote ssh authorized keys file for user: core Dec 13 13:14:55.488959 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 13:14:55.488959 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 13:14:55.550352 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 13:14:55.743218 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 13:14:55.743218 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 13:14:55.766646 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Dec 13 13:14:56.194525 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 13:14:56.257120 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 13:14:56.257120 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 13:14:56.278072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 13:14:56.278072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 13:14:56.278072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 13:14:56.278072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 13:14:56.278072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 13:14:56.278072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 13:14:56.278072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 13:14:56.278072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:14:56.278072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:14:56.278072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 13:14:56.278072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 13:14:56.278072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 13:14:56.278072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Dec 13 13:14:56.671995 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 13:14:56.842771 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 13:14:56.842771 ignition[1063]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 13 13:14:56.871848 ignition[1063]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:14:56.884397 ignition[1063]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:14:56.884397 ignition[1063]: INFO : files: op(c): 
[finished] processing unit "prepare-helm.service" Dec 13 13:14:56.884397 ignition[1063]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Dec 13 13:14:56.884397 ignition[1063]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 13:14:56.884397 ignition[1063]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:14:56.884397 ignition[1063]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:14:56.884397 ignition[1063]: INFO : files: files passed Dec 13 13:14:56.884397 ignition[1063]: INFO : Ignition finished successfully Dec 13 13:14:56.884243 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 13:14:56.918889 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 13:14:56.943317 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 13:14:56.965978 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 13:14:56.966067 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 13:14:57.008733 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:14:57.008733 initrd-setup-root-after-ignition[1090]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:14:57.027816 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:14:57.018946 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:14:57.034796 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 13:14:57.063381 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 13:14:57.094996 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 13:14:57.095180 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 13:14:57.107587 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 13:14:57.120488 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 13:14:57.131794 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 13:14:57.146387 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 13:14:57.165767 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:14:57.182612 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 13:14:57.201498 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:14:57.208436 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:14:57.221376 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 13:14:57.232791 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 13:14:57.232918 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:14:57.249264 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 13:14:57.255287 systemd[1]: Stopped target basic.target - Basic System. Dec 13 13:14:57.267164 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Dec 13 13:14:57.278874 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:14:57.290414 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 13:14:57.302686 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 13:14:57.315516 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:14:57.329011 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 13:14:57.341962 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 13:14:57.355143 systemd[1]: Stopped target swap.target - Swaps. Dec 13 13:14:57.364995 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 13:14:57.365120 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:14:57.380444 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:14:57.386729 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:14:57.398762 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 13:14:57.398834 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:14:57.412014 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 13:14:57.412157 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 13:14:57.430045 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 13:14:57.430197 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:14:57.437573 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 13:14:57.437667 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 13:14:57.448458 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 13:14:57.448555 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 13:14:57.481423 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 13:14:57.537638 ignition[1115]: INFO : Ignition 2.20.0 Dec 13 13:14:57.537638 ignition[1115]: INFO : Stage: umount Dec 13 13:14:57.537638 ignition[1115]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:14:57.537638 ignition[1115]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:14:57.537638 ignition[1115]: INFO : umount: umount passed Dec 13 13:14:57.537638 ignition[1115]: INFO : Ignition finished successfully Dec 13 13:14:57.497381 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 13:14:57.516031 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 13:14:57.516259 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:14:57.523615 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 13:14:57.523767 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:14:57.549080 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 13:14:57.549213 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 13:14:57.559637 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 13:14:57.559897 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 13:14:57.573393 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 13:14:57.573458 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Dec 13 13:14:57.592290 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 13:14:57.592358 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 13:14:57.602984 systemd[1]: Stopped target network.target - Network. Dec 13 13:14:57.613373 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 13:14:57.613441 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:14:57.625827 systemd[1]: Stopped target paths.target - Path Units. Dec 13 13:14:57.637306 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 13:14:57.641159 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:14:57.649514 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 13:14:57.661092 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 13:14:57.672649 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 13:14:57.672703 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:14:57.683286 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 13:14:57.683327 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:14:57.694019 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 13:14:57.694074 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 13:14:57.705846 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 13:14:57.705900 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 13:14:57.717373 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 13:14:57.723542 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 13:14:57.735770 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 13:14:57.736418 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 13:14:57.736516 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 13:14:57.974604 kernel: hv_netvsc 000d3af7-acec-000d-3af7-acec000d3af7 eth0: Data path switched from VF: enP52625s1 Dec 13 13:14:57.747789 systemd-networkd[873]: eth0: DHCPv6 lease lost Dec 13 13:14:57.750908 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 13:14:57.751283 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 13:14:57.765671 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 13:14:57.765822 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 13:14:57.779245 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 13:14:57.779302 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:14:57.809352 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 13:14:57.823670 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 13:14:57.823743 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:14:57.835855 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 13:14:57.835908 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:14:57.849644 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 13:14:57.849694 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 13:14:57.860667 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Dec 13 13:14:57.860717 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:14:57.872963 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:14:57.913810 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 13:14:57.914231 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:14:57.926335 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 13:14:57.926387 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 13:14:57.937655 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 13:14:57.937690 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:14:57.957266 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 13:14:57.957325 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:14:57.974676 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 13:14:57.974740 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 13:14:57.984692 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:14:57.984757 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:14:58.025388 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 13:14:58.038424 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 13:14:58.038494 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:14:58.051233 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 13:14:58.051297 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:14:58.070749 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 13:14:58.070816 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:14:58.082534 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:14:58.082585 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:14:58.094590 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 13:14:58.094699 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 13:14:58.106323 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 13:14:58.313799 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Dec 13 13:14:58.106420 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 13:14:58.117108 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 13:14:58.117209 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 13:14:58.130981 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 13:14:58.143272 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 13:14:58.143369 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 13:14:58.172387 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 13:14:58.191547 systemd[1]: Switching root. 
Dec 13 13:14:58.359042 systemd-journald[218]: Journal stopped
Dec 13 13:14:47.349958 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 13:14:47.349964 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 13:14:47.349971 kernel: Fallback order for Node 0: 0 Dec 13 13:14:47.349977 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Dec 13 13:14:47.349985 kernel: Policy zone: Normal Dec 13 13:14:47.349997 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 13:14:47.350003 kernel: software IO TLB: area num 2. Dec 13 13:14:47.350009 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB) Dec 13 13:14:47.350016 kernel: Memory: 3982056K/4194160K available (10304K kernel code, 2184K rwdata, 8088K rodata, 39936K init, 897K bss, 212104K reserved, 0K cma-reserved) Dec 13 13:14:47.350023 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 13:14:47.350030 kernel: trace event string verifier disabled Dec 13 13:14:47.350036 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 13:14:47.350043 kernel: rcu: RCU event tracing is enabled. Dec 13 13:14:47.350050 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 13:14:47.350057 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 13:14:47.350065 kernel: Tracing variant of Tasks RCU enabled. Dec 13 13:14:47.350071 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 13:14:47.350078 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 13:14:47.350084 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 13:14:47.350091 kernel: GICv3: 960 SPIs implemented Dec 13 13:14:47.350097 kernel: GICv3: 0 Extended SPIs implemented Dec 13 13:14:47.350103 kernel: Root IRQ handler: gic_handle_irq Dec 13 13:14:47.350110 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 13 13:14:47.350117 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Dec 13 13:14:47.350123 kernel: ITS: No ITS available, not enabling LPIs Dec 13 13:14:47.350130 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 13:14:47.350136 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 13:14:47.350145 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 13 13:14:47.350151 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 13 13:14:47.350158 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 13 13:14:47.350164 kernel: Console: colour dummy device 80x25 Dec 13 13:14:47.350171 kernel: printk: console [tty1] enabled Dec 13 13:14:47.350178 kernel: ACPI: Core revision 20230628 Dec 13 13:14:47.350184 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 13 13:14:47.350191 kernel: pid_max: default: 32768 minimum: 301 Dec 13 13:14:47.350198 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 13:14:47.350204 kernel: landlock: Up and running. Dec 13 13:14:47.350213 kernel: SELinux: Initializing. Dec 13 13:14:47.350219 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 13:14:47.350226 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 13:14:47.350233 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Dec 13 13:14:47.350239 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 13:14:47.350246 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Dec 13 13:14:47.350260 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0 Dec 13 13:14:47.350267 kernel: Hyper-V: enabling crash_kexec_post_notifiers Dec 13 13:14:47.350274 kernel: rcu: Hierarchical SRCU implementation. Dec 13 13:14:47.350281 kernel: rcu: Max phase no-delay instances is 400. Dec 13 13:14:47.350287 kernel: Remapping and enabling EFI services. Dec 13 13:14:47.350296 kernel: smp: Bringing up secondary CPUs ... Dec 13 13:14:47.350303 kernel: Detected PIPT I-cache on CPU1 Dec 13 13:14:47.350310 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Dec 13 13:14:47.350317 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 13:14:47.350324 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 13 13:14:47.350332 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 13:14:47.350339 kernel: SMP: Total of 2 processors activated. Dec 13 13:14:47.350346 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 13:14:47.350353 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Dec 13 13:14:47.350360 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 13 13:14:47.350367 kernel: CPU features: detected: CRC32 instructions Dec 13 13:14:47.350374 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 13 13:14:47.350381 kernel: CPU features: detected: LSE atomic instructions Dec 13 13:14:47.350388 kernel: CPU features: detected: Privileged Access Never Dec 13 13:14:47.350396 kernel: CPU: All CPU(s) started at EL1 Dec 13 13:14:47.350403 kernel: alternatives: applying system-wide alternatives Dec 13 13:14:47.350410 kernel: devtmpfs: initialized Dec 13 13:14:47.350417 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 13:14:47.350424 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 13:14:47.350431 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 13:14:47.350438 kernel: SMBIOS 3.1.0 present. Dec 13 13:14:47.350445 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Dec 13 13:14:47.350452 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 13:14:47.350461 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 13 13:14:47.350468 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 13:14:47.350475 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 13:14:47.350482 kernel: audit: initializing netlink subsys (disabled) Dec 13 13:14:47.350489 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Dec 13 13:14:47.350496 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 13:14:47.350503 kernel: cpuidle: using governor menu Dec 13 13:14:47.350510 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Dec 13 13:14:47.350517 kernel: ASID allocator initialised with 32768 entries Dec 13 13:14:47.350525 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 13:14:47.350532 kernel: Serial: AMBA PL011 UART driver Dec 13 13:14:47.350539 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 13 13:14:47.350546 kernel: Modules: 0 pages in range for non-PLT usage Dec 13 13:14:47.350553 kernel: Modules: 508880 pages in range for PLT usage Dec 13 13:14:47.350560 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 13:14:47.350567 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 13:14:47.350574 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 13:14:47.350581 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 13 13:14:47.350604 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 13:14:47.350611 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 13:14:47.350618 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 13:14:47.350625 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 13 13:14:47.350632 kernel: ACPI: Added _OSI(Module Device) Dec 13 13:14:47.350639 kernel: ACPI: Added _OSI(Processor Device) Dec 13 13:14:47.350646 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 13:14:47.350653 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 13:14:47.350659 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 13:14:47.350668 kernel: ACPI: Interpreter enabled Dec 13 13:14:47.350675 kernel: ACPI: Using GIC for interrupt routing Dec 13 13:14:47.350682 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Dec 13 13:14:47.350689 kernel: printk: console [ttyAMA0] enabled Dec 13 13:14:47.350696 kernel: printk: bootconsole [pl11] disabled Dec 13 13:14:47.350702 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Dec 13 13:14:47.350709 kernel: iommu: Default domain type: Translated Dec 13 13:14:47.350716 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 13:14:47.350723 kernel: efivars: Registered efivars operations Dec 13 13:14:47.350742 kernel: vgaarb: loaded Dec 13 13:14:47.350750 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 13:14:47.350757 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 13:14:47.350764 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 13:14:47.350771 kernel: pnp: PnP ACPI init Dec 13 13:14:47.350778 kernel: pnp: PnP ACPI: found 0 devices Dec 13 13:14:47.350785 kernel: NET: Registered PF_INET protocol family Dec 13 13:14:47.350792 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 13:14:47.350799 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 13:14:47.350808 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 13:14:47.350815 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 13:14:47.350822 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 13:14:47.350829 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 13:14:47.350836 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 13:14:47.350843 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 13:14:47.350850 
kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 13:14:47.350857 kernel: PCI: CLS 0 bytes, default 64 Dec 13 13:14:47.350864 kernel: kvm [1]: HYP mode not available Dec 13 13:14:47.350873 kernel: Initialise system trusted keyrings Dec 13 13:14:47.350879 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 13:14:47.350886 kernel: Key type asymmetric registered Dec 13 13:14:47.350893 kernel: Asymmetric key parser 'x509' registered Dec 13 13:14:47.350900 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 13:14:47.350907 kernel: io scheduler mq-deadline registered Dec 13 13:14:47.350914 kernel: io scheduler kyber registered Dec 13 13:14:47.350921 kernel: io scheduler bfq registered Dec 13 13:14:47.350928 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 13:14:47.350936 kernel: thunder_xcv, ver 1.0 Dec 13 13:14:47.350943 kernel: thunder_bgx, ver 1.0 Dec 13 13:14:47.350950 kernel: nicpf, ver 1.0 Dec 13 13:14:47.350957 kernel: nicvf, ver 1.0 Dec 13 13:14:47.351091 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 13:14:47.351160 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T13:14:46 UTC (1734095686) Dec 13 13:14:47.351169 kernel: efifb: probing for efifb Dec 13 13:14:47.351177 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 13 13:14:47.351186 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 13 13:14:47.351193 kernel: efifb: scrolling: redraw Dec 13 13:14:47.351199 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 13:14:47.351207 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 13:14:47.351213 kernel: fb0: EFI VGA frame buffer device Dec 13 13:14:47.351220 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Dec 13 13:14:47.351227 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 13:14:47.351234 kernel: No ACPI PMU IRQ for CPU0 Dec 13 13:14:47.351241 kernel: No ACPI PMU IRQ for CPU1 Dec 13 13:14:47.351249 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Dec 13 13:14:47.351256 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 13:14:47.351263 kernel: watchdog: Hard watchdog permanently disabled Dec 13 13:14:47.351270 kernel: NET: Registered PF_INET6 protocol family Dec 13 13:14:47.351277 kernel: Segment Routing with IPv6 Dec 13 13:14:47.351284 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 13:14:47.351291 kernel: NET: Registered PF_PACKET protocol family Dec 13 13:14:47.351297 kernel: Key type dns_resolver registered Dec 13 13:14:47.351304 kernel: registered taskstats version 1 Dec 13 13:14:47.351313 kernel: Loading compiled-in X.509 certificates Dec 13 13:14:47.351320 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 752b3e36c6039904ea643ccad2b3f5f3cb4ebf78' Dec 13 13:14:47.351327 kernel: Key type .fscrypt registered Dec 13 13:14:47.351333 kernel: Key type fscrypt-provisioning registered Dec 13 13:14:47.351340 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 13:14:47.351348 kernel: ima: Allocated hash algorithm: sha1 Dec 13 13:14:47.351355 kernel: ima: No architecture policies found Dec 13 13:14:47.351362 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 13:14:47.351368 kernel: clk: Disabling unused clocks Dec 13 13:14:47.351376 kernel: Freeing unused kernel memory: 39936K Dec 13 13:14:47.351383 kernel: Run /init as init process Dec 13 13:14:47.351390 kernel: with arguments: Dec 13 13:14:47.351397 kernel: /init Dec 13 13:14:47.351403 kernel: with environment: Dec 13 13:14:47.351410 kernel: HOME=/ Dec 13 13:14:47.351417 kernel: TERM=linux Dec 13 13:14:47.351424 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 13:14:47.351433 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 13:14:47.351443 systemd[1]: Detected virtualization microsoft. Dec 13 13:14:47.351451 systemd[1]: Detected architecture arm64. Dec 13 13:14:47.351458 systemd[1]: Running in initrd. Dec 13 13:14:47.351466 systemd[1]: No hostname configured, using default hostname. Dec 13 13:14:47.351473 systemd[1]: Hostname set to . Dec 13 13:14:47.351480 systemd[1]: Initializing machine ID from random generator. Dec 13 13:14:47.351488 systemd[1]: Queued start job for default target initrd.target. Dec 13 13:14:47.351497 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:14:47.351505 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:14:47.351513 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 13:14:47.351520 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:14:47.351528 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 13:14:47.351536 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 13:14:47.351544 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 13:14:47.351554 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 13:14:47.351561 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:14:47.351569 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:14:47.351576 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:14:47.351583 systemd[1]: Reached target slices.target - Slice Units. Dec 13 13:14:47.351591 systemd[1]: Reached target swap.target - Swaps. Dec 13 13:14:47.351598 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:14:47.351605 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:14:47.351614 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:14:47.351622 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 13:14:47.351629 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Dec 13 13:14:47.351637 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:14:47.351644 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 13:14:47.351652 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:14:47.351659 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:14:47.351667 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 13:14:47.351674 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:14:47.351683 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 13:14:47.351690 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 13:14:47.351698 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:14:47.351705 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:14:47.357390 systemd-journald[218]: Collecting audit messages is disabled. Dec 13 13:14:47.357432 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:14:47.357442 systemd-journald[218]: Journal started Dec 13 13:14:47.357464 systemd-journald[218]: Runtime Journal (/run/log/journal/5b3afe2b946a4c1988276eca0cc9c033) is 8.0M, max 78.5M, 70.5M free. Dec 13 13:14:47.357842 systemd-modules-load[219]: Inserted module 'overlay' Dec 13 13:14:47.373062 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:14:47.391748 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 13:14:47.392090 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 13:14:47.409450 kernel: Bridge firewalling registered Dec 13 13:14:47.403178 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:14:47.408478 systemd-modules-load[219]: Inserted module 'br_netfilter' Dec 13 13:14:47.417236 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 13:14:47.427646 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 13:14:47.438263 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:14:47.460974 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:14:47.477790 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:14:47.498864 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 13:14:47.516886 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:14:47.526762 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:14:47.542772 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:14:47.567764 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:14:47.576747 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:14:47.603179 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 13:14:47.618339 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:14:47.637449 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Dec 13 13:14:47.657574 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:14:47.678484 dracut-cmdline[250]: dracut-dracut-053 Dec 13 13:14:47.678484 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472 Dec 13 13:14:47.720951 systemd-resolved[253]: Positive Trust Anchors: Dec 13 13:14:47.720967 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:14:47.720997 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:14:47.723620 systemd-resolved[253]: Defaulting to hostname 'linux'. Dec 13 13:14:47.725812 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:14:47.733236 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:14:47.838777 kernel: SCSI subsystem initialized Dec 13 13:14:47.847751 kernel: Loading iSCSI transport class v2.0-870. Dec 13 13:14:47.858810 kernel: iscsi: registered transport (tcp) Dec 13 13:14:47.877051 kernel: iscsi: registered transport (qla4xxx) Dec 13 13:14:47.877105 kernel: QLogic iSCSI HBA Driver Dec 13 13:14:47.915400 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 13:14:47.933930 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 13:14:47.974837 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 13:14:47.974895 kernel: device-mapper: uevent: version 1.0.3 Dec 13 13:14:47.982045 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 13:14:48.031750 kernel: raid6: neonx8 gen() 15771 MB/s Dec 13 13:14:48.051739 kernel: raid6: neonx4 gen() 15799 MB/s Dec 13 13:14:48.071736 kernel: raid6: neonx2 gen() 13344 MB/s Dec 13 13:14:48.092742 kernel: raid6: neonx1 gen() 10543 MB/s Dec 13 13:14:48.112735 kernel: raid6: int64x8 gen() 6792 MB/s Dec 13 13:14:48.132736 kernel: raid6: int64x4 gen() 7359 MB/s Dec 13 13:14:48.153737 kernel: raid6: int64x2 gen() 6112 MB/s Dec 13 13:14:48.177264 kernel: raid6: int64x1 gen() 5059 MB/s Dec 13 13:14:48.177277 kernel: raid6: using algorithm neonx4 gen() 15799 MB/s Dec 13 13:14:48.200793 kernel: raid6: .... 
xor() 12447 MB/s, rmw enabled Dec 13 13:14:48.200805 kernel: raid6: using neon recovery algorithm Dec 13 13:14:48.212530 kernel: xor: measuring software checksum speed Dec 13 13:14:48.212544 kernel: 8regs : 21630 MB/sec Dec 13 13:14:48.215882 kernel: 32regs : 21699 MB/sec Dec 13 13:14:48.219258 kernel: arm64_neon : 28013 MB/sec Dec 13 13:14:48.223600 kernel: xor: using function: arm64_neon (28013 MB/sec) Dec 13 13:14:48.273756 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 13:14:48.284230 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:14:48.299882 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:14:48.322430 systemd-udevd[436]: Using default interface naming scheme 'v255'. Dec 13 13:14:48.327831 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:14:48.345849 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 13:14:48.379120 dracut-pre-trigger[451]: rd.md=0: removing MD RAID activation Dec 13 13:14:48.412197 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:14:48.426950 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:14:48.464254 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:14:48.482901 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 13:14:48.504014 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 13:14:48.520490 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:14:48.538782 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:14:48.552015 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:14:48.580760 kernel: hv_vmbus: Vmbus version:5.3 Dec 13 13:14:48.593020 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 13:14:48.630098 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 13 13:14:48.630123 kernel: hv_vmbus: registering driver hid_hyperv Dec 13 13:14:48.630135 kernel: hv_vmbus: registering driver hv_netvsc Dec 13 13:14:48.630146 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Dec 13 13:14:48.630157 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 13 13:14:48.624040 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:14:48.658468 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Dec 13 13:14:48.651342 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:14:48.678625 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 13:14:48.678648 kernel: hv_vmbus: registering driver hv_storvsc Dec 13 13:14:48.678657 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 13:14:48.651490 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 13:14:48.719646 kernel: scsi host1: storvsc_host_t Dec 13 13:14:48.719826 kernel: scsi host0: storvsc_host_t Dec 13 13:14:48.719913 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 13 13:14:48.678919 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:14:48.691128 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:14:48.752508 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Dec 13 13:14:48.752551 kernel: hv_netvsc 000d3af7-acec-000d-3af7-acec000d3af7 eth0: VF slot 1 added Dec 13 13:14:48.691352 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:14:48.705828 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:14:48.739904 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:14:48.799540 kernel: PTP clock support registered Dec 13 13:14:48.799561 kernel: hv_vmbus: registering driver hv_pci Dec 13 13:14:48.762025 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:14:48.817008 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 13 13:14:48.885893 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 13:14:48.885919 kernel: hv_pci b43e206d-cd91-428d-acea-9a73b5bc39c3: PCI VMBus probing: Using version 0x10004 Dec 13 13:14:49.310047 kernel: hv_utils: Registering HyperV Utility Driver Dec 13 13:14:49.310067 kernel: hv_pci b43e206d-cd91-428d-acea-9a73b5bc39c3: PCI host bridge to bus cd91:00 Dec 13 13:14:49.310205 kernel: hv_vmbus: registering driver hv_utils Dec 13 13:14:49.310217 kernel: pci_bus cd91:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Dec 13 13:14:49.310335 kernel: hv_utils: Shutdown IC version 3.2 Dec 13 13:14:49.310346 kernel: pci_bus cd91:00: No busn resource found for root bus, will use [bus 00-ff] Dec 13 13:14:49.310453 kernel: hv_utils: Heartbeat IC version 3.0 Dec 13 13:14:49.310476 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 13 13:14:49.310812 kernel: pci cd91:00:02.0: [15b3:1018] type 00 class 0x020000 Dec 13 13:14:49.310990 kernel: hv_utils: TimeSync IC version 4.0 Dec 13 13:14:49.311002 kernel: pci cd91:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 13 13:14:49.311091 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 13 13:14:49.311207 kernel: pci cd91:00:02.0: enabling Extended Tags Dec 13 13:14:49.311303 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 13:14:49.311390 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 13:14:49.311474 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 13 13:14:49.311553 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 13 13:14:49.311633 kernel: pci cd91:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at cd91:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Dec 13 13:14:49.311721 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:14:49.311730 kernel: pci_bus cd91:00: busn_res: [bus 00-ff] end is updated to 00 Dec 13 13:14:49.311808 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 13:14:49.311885 kernel: pci cd91:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 13 13:14:48.762114 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:14:48.807072 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 13 13:14:48.839196 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:14:48.877143 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:14:49.218697 systemd-resolved[253]: Clock change detected. Flushing caches. Dec 13 13:14:49.302351 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:14:49.371262 kernel: mlx5_core cd91:00:02.0: enabling device (0000 -> 0002) Dec 13 13:14:49.587580 kernel: mlx5_core cd91:00:02.0: firmware version: 16.30.1284 Dec 13 13:14:49.587701 kernel: hv_netvsc 000d3af7-acec-000d-3af7-acec000d3af7 eth0: VF registering: eth1 Dec 13 13:14:49.587787 kernel: mlx5_core cd91:00:02.0 eth1: joined to eth0 Dec 13 13:14:49.587877 kernel: mlx5_core cd91:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Dec 13 13:14:49.595171 kernel: mlx5_core cd91:00:02.0 enP52625s1: renamed from eth1 Dec 13 13:14:49.821949 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Dec 13 13:14:49.935203 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (500) Dec 13 13:14:49.949176 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 13 13:14:49.973576 kernel: BTRFS: device fsid 47b12626-f7d3-4179-9720-ca262eb4c614 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (495) Dec 13 13:14:49.985089 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Dec 13 13:14:50.003423 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Dec 13 13:14:50.010608 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Dec 13 13:14:50.042311 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 13:14:50.066206 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:14:50.074157 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:14:51.084743 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:14:51.084800 disk-uuid[603]: The operation has completed successfully. Dec 13 13:14:51.145451 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 13:14:51.145554 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 13:14:51.175276 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 13:14:51.189109 sh[689]: Success Dec 13 13:14:51.229333 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 13:14:51.471581 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 13:14:51.484576 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 13:14:51.495280 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 13:14:51.526698 kernel: BTRFS info (device dm-0): first mount of filesystem 47b12626-f7d3-4179-9720-ca262eb4c614 Dec 13 13:14:51.526748 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 13:14:51.534290 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 13:14:51.539956 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 13:14:51.544574 kernel: BTRFS info (device dm-0): using free space tree Dec 13 13:14:51.928780 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Dec 13 13:14:51.934868 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 13:14:51.957375 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 13:14:51.970312 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 13:14:52.002400 kernel: BTRFS info (device sda6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:14:52.002422 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 13:14:52.002438 kernel: BTRFS info (device sda6): using free space tree Dec 13 13:14:52.012262 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 13:14:52.029240 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 13:14:52.034259 kernel: BTRFS info (device sda6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:14:52.041215 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 13:14:52.057462 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 13:14:52.106442 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:14:52.126269 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:14:52.152833 systemd-networkd[873]: lo: Link UP Dec 13 13:14:52.152846 systemd-networkd[873]: lo: Gained carrier Dec 13 13:14:52.154423 systemd-networkd[873]: Enumeration completed Dec 13 13:14:52.156722 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:14:52.157564 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:14:52.157567 systemd-networkd[873]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:14:52.166562 systemd[1]: Reached target network.target - Network. Dec 13 13:14:52.244153 kernel: mlx5_core cd91:00:02.0 enP52625s1: Link up Dec 13 13:14:52.284157 kernel: hv_netvsc 000d3af7-acec-000d-3af7-acec000d3af7 eth0: Data path switched to VF: enP52625s1 Dec 13 13:14:52.284414 systemd-networkd[873]: enP52625s1: Link UP Dec 13 13:14:52.284646 systemd-networkd[873]: eth0: Link UP Dec 13 13:14:52.284993 systemd-networkd[873]: eth0: Gained carrier Dec 13 13:14:52.285003 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:14:52.294784 systemd-networkd[873]: enP52625s1: Gained carrier Dec 13 13:14:52.319187 systemd-networkd[873]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 13:14:52.920085 ignition[805]: Ignition 2.20.0 Dec 13 13:14:52.920097 ignition[805]: Stage: fetch-offline Dec 13 13:14:52.924585 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:14:52.920151 ignition[805]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:14:52.920159 ignition[805]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:14:52.920259 ignition[805]: parsed url from cmdline: "" Dec 13 13:14:52.920263 ignition[805]: no config URL provided Dec 13 13:14:52.920267 ignition[805]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:14:52.952357 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 13 13:14:52.920274 ignition[805]: no config at "/usr/lib/ignition/user.ign" Dec 13 13:14:52.920278 ignition[805]: failed to fetch config: resource requires networking Dec 13 13:14:52.920448 ignition[805]: Ignition finished successfully Dec 13 13:14:52.977951 ignition[884]: Ignition 2.20.0 Dec 13 13:14:52.977959 ignition[884]: Stage: fetch Dec 13 13:14:52.978158 ignition[884]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:14:52.978168 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:14:52.978266 ignition[884]: parsed url from cmdline: "" Dec 13 13:14:52.978269 ignition[884]: no config URL provided Dec 13 13:14:52.978274 ignition[884]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:14:52.978281 ignition[884]: no config at "/usr/lib/ignition/user.ign" Dec 13 13:14:52.978306 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 13 13:14:53.066273 ignition[884]: GET result: OK Dec 13 13:14:53.066372 ignition[884]: config has been read from IMDS userdata Dec 13 13:14:53.066412 ignition[884]: parsing config with SHA512: 670c85fc2ed921b3d0f08f163660321975b731cef3f287b83f3d8db020ee05a6fb2b6caa7a3d697df467858db27948c4448f147a2a62deb6953592c1ab276259 Dec 13 13:14:53.071548 unknown[884]: fetched base config from "system" Dec 13 13:14:53.071975 ignition[884]: fetch: fetch complete Dec 13 13:14:53.071558 unknown[884]: fetched base config from "system" Dec 13 13:14:53.071980 ignition[884]: fetch: fetch passed Dec 13 13:14:53.071563 unknown[884]: fetched user config from "azure" Dec 13 13:14:53.072023 ignition[884]: Ignition finished successfully Dec 13 13:14:53.076473 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 13:14:53.101258 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 13:14:53.114956 ignition[890]: Ignition 2.20.0 Dec 13 13:14:53.118058 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 13:14:53.114962 ignition[890]: Stage: kargs Dec 13 13:14:53.115227 ignition[890]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:14:53.139321 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 13:14:53.115238 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:14:53.116155 ignition[890]: kargs: kargs passed Dec 13 13:14:53.116199 ignition[890]: Ignition finished successfully Dec 13 13:14:53.170461 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 13:14:53.161446 ignition[896]: Ignition 2.20.0 Dec 13 13:14:53.179553 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 13:14:53.161452 ignition[896]: Stage: disks Dec 13 13:14:53.190514 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 13:14:53.161708 ignition[896]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:14:53.200037 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:14:53.161718 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:14:53.210986 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:14:53.167651 ignition[896]: disks: disks passed Dec 13 13:14:53.219442 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:14:53.167702 ignition[896]: Ignition finished successfully Dec 13 13:14:53.246353 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
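The fetch stage above reads the Ignition config from the Azure IMDS userData endpoint and then logs a SHA512 of the parsed config. A minimal sketch of that request follows; the "Metadata: true" header and the base64 encoding of userData are assumptions about IMDS behaviour, not something shown in the log.

    # Sketch: fetch instance userData from Azure IMDS and hash it, roughly
    # mirroring the GET and "parsing config with SHA512" lines above.
    # The Metadata header and base64 decoding are assumptions about IMDS.
    import base64
    import hashlib
    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        body = resp.read()

    config = base64.b64decode(body)            # userData is served base64-encoded (assumed)
    print(hashlib.sha512(config).hexdigest())  # compare with the digest Ignition logs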
Dec 13 13:14:53.311344 systemd-fsck[904]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Dec 13 13:14:53.318418 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 13:14:53.340329 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 13:14:53.397147 kernel: EXT4-fs (sda9): mounted filesystem 0aa4851d-a2ba-4d04-90b3-5d00bf608ecc r/w with ordered data mode. Quota mode: none. Dec 13 13:14:53.397787 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 13:14:53.402871 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 13:14:53.448207 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:14:53.458830 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 13:14:53.468305 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 13:14:53.474488 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 13:14:53.474524 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:14:53.501357 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 13:14:53.523385 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (915) Dec 13 13:14:53.522311 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 13:14:53.548400 kernel: BTRFS info (device sda6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:14:53.548433 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 13:14:53.548444 kernel: BTRFS info (device sda6): using free space tree Dec 13 13:14:53.547645 systemd-networkd[873]: eth0: Gained IPv6LL Dec 13 13:14:53.560338 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 13:14:53.561780 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 13:14:53.735223 systemd-networkd[873]: enP52625s1: Gained IPv6LL Dec 13 13:14:54.027515 coreos-metadata[917]: Dec 13 13:14:54.027 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 13:14:54.035728 coreos-metadata[917]: Dec 13 13:14:54.035 INFO Fetch successful Dec 13 13:14:54.035728 coreos-metadata[917]: Dec 13 13:14:54.035 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 13 13:14:54.053847 coreos-metadata[917]: Dec 13 13:14:54.053 INFO Fetch successful Dec 13 13:14:54.112177 coreos-metadata[917]: Dec 13 13:14:54.112 INFO wrote hostname ci-4186.0.0-a-128d80e197 to /sysroot/etc/hostname Dec 13 13:14:54.122368 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 13:14:54.318073 initrd-setup-root[945]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 13:14:54.357224 initrd-setup-root[952]: cut: /sysroot/etc/group: No such file or directory Dec 13 13:14:54.376957 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 13:14:54.385854 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 13:14:55.148976 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 13:14:55.166375 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 13:14:55.180515 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
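flatcar-metadata-hostname.service above fetches the instance name from IMDS and writes it to /sysroot/etc/hostname. A small sketch of those two steps; the endpoint and output path are taken from the log, while the Metadata header is again an assumption.

    # Sketch: fetch the compute name from IMDS and write it as the hostname,
    # mirroring the coreos-metadata lines above. The header requirement is assumed.
    import urllib.request

    def fetch_instance_name():
        url = ("http://169.254.169.254/metadata/instance/compute/name"
               "?api-version=2017-08-01&format=text")
        req = urllib.request.Request(url, headers={"Metadata": "true"})
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode().strip()

    def write_hostname(name, path="/sysroot/etc/hostname"):
        with open(path, "w") as f:
            f.write(name + "\n")

    if __name__ == "__main__":
        write_hostname(fetch_instance_name())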
Dec 13 13:14:55.202461 kernel: BTRFS info (device sda6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:14:55.199741 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 13:14:55.222985 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 13:14:55.238402 ignition[1036]: INFO : Ignition 2.20.0 Dec 13 13:14:55.244751 ignition[1036]: INFO : Stage: mount Dec 13 13:14:55.244751 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:14:55.244751 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:14:55.244751 ignition[1036]: INFO : mount: mount passed Dec 13 13:14:55.244751 ignition[1036]: INFO : Ignition finished successfully Dec 13 13:14:55.245669 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 13:14:55.266240 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 13:14:55.281793 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:14:55.336716 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1045) Dec 13 13:14:55.336761 kernel: BTRFS info (device sda6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:14:55.343647 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 13:14:55.348690 kernel: BTRFS info (device sda6): using free space tree Dec 13 13:14:55.356147 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 13:14:55.358008 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 13:14:55.386829 ignition[1063]: INFO : Ignition 2.20.0 Dec 13 13:14:55.386829 ignition[1063]: INFO : Stage: files Dec 13 13:14:55.396158 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:14:55.396158 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:14:55.396158 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping Dec 13 13:14:55.396158 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 13:14:55.396158 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 13:14:55.457122 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 13:14:55.465888 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 13:14:55.465888 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 13:14:55.457560 unknown[1063]: wrote ssh authorized keys file for user: core Dec 13 13:14:55.488959 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 13:14:55.488959 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 13:14:55.550352 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 13:14:55.743218 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 13:14:55.743218 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 13:14:55.766646 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Dec 13 13:14:56.194525 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 13:14:56.257120 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 13:14:56.257120 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 13:14:56.278072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 13:14:56.278072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 13:14:56.278072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 13:14:56.278072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 13:14:56.278072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 13:14:56.278072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 13:14:56.278072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 13:14:56.278072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:14:56.278072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:14:56.278072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 13:14:56.278072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 13:14:56.278072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 13:14:56.278072 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Dec 13 13:14:56.671995 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 13:14:56.842771 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 13:14:56.842771 ignition[1063]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 13 13:14:56.871848 ignition[1063]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:14:56.884397 ignition[1063]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:14:56.884397 ignition[1063]: INFO : files: op(c): 
[finished] processing unit "prepare-helm.service" Dec 13 13:14:56.884397 ignition[1063]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Dec 13 13:14:56.884397 ignition[1063]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 13:14:56.884397 ignition[1063]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:14:56.884397 ignition[1063]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:14:56.884397 ignition[1063]: INFO : files: files passed Dec 13 13:14:56.884397 ignition[1063]: INFO : Ignition finished successfully Dec 13 13:14:56.884243 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 13:14:56.918889 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 13:14:56.943317 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 13:14:56.965978 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 13:14:56.966067 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 13:14:57.008733 initrd-setup-root-after-ignition[1090]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:14:57.008733 initrd-setup-root-after-ignition[1090]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:14:57.027816 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:14:57.018946 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:14:57.034796 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 13:14:57.063381 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 13:14:57.094996 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 13:14:57.095180 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 13:14:57.107587 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 13:14:57.120488 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 13:14:57.131794 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 13:14:57.146387 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 13:14:57.165767 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:14:57.182612 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 13:14:57.201498 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:14:57.208436 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:14:57.221376 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 13:14:57.232791 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 13:14:57.232918 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:14:57.249264 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 13:14:57.255287 systemd[1]: Stopped target basic.target - Basic System. Dec 13 13:14:57.267164 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
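The files stage shown above downloads release tarballs (helm, the cilium CLI) under /sysroot/opt and enables prepare-helm.service, which later unpacks helm into /opt/bin. A hedged sketch of that unpack step; the URL and destination directory come from the log, but the tarball member name "linux-arm64/helm" is an assumption about the archive layout.

    # Sketch: download a release tarball and extract one binary into /opt/bin,
    # similar in spirit to the prepare-helm flow referenced above.
    # The member path "linux-arm64/helm" is an assumed archive layout.
    import os
    import shutil
    import tarfile
    import urllib.request

    URL = "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"
    DEST_DIR = "/opt/bin"

    os.makedirs(DEST_DIR, exist_ok=True)
    archive, _ = urllib.request.urlretrieve(URL)
    with tarfile.open(archive, "r:gz") as tar:
        member = tar.getmember("linux-arm64/helm")  # assumed layout of the tarball
        dest = os.path.join(DEST_DIR, "helm")
        with tar.extractfile(member) as src, open(dest, "wb") as dst:
            shutil.copyfileobj(src, dst)
    os.chmod(dest, 0o755)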
Dec 13 13:14:57.278874 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:14:57.290414 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 13:14:57.302686 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 13:14:57.315516 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:14:57.329011 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 13:14:57.341962 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 13:14:57.355143 systemd[1]: Stopped target swap.target - Swaps. Dec 13 13:14:57.364995 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 13:14:57.365120 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:14:57.380444 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:14:57.386729 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:14:57.398762 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 13:14:57.398834 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:14:57.412014 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 13:14:57.412157 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 13:14:57.430045 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 13:14:57.430197 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:14:57.437573 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 13:14:57.437667 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 13:14:57.448458 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 13:14:57.448555 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 13:14:57.481423 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 13:14:57.537638 ignition[1115]: INFO : Ignition 2.20.0 Dec 13 13:14:57.537638 ignition[1115]: INFO : Stage: umount Dec 13 13:14:57.537638 ignition[1115]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:14:57.537638 ignition[1115]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 13:14:57.537638 ignition[1115]: INFO : umount: umount passed Dec 13 13:14:57.537638 ignition[1115]: INFO : Ignition finished successfully Dec 13 13:14:57.497381 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 13:14:57.516031 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 13:14:57.516259 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:14:57.523615 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 13:14:57.523767 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:14:57.549080 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 13:14:57.549213 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 13:14:57.559637 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 13:14:57.559897 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 13:14:57.573393 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 13:14:57.573458 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Dec 13 13:14:57.592290 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 13:14:57.592358 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 13:14:57.602984 systemd[1]: Stopped target network.target - Network. Dec 13 13:14:57.613373 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 13:14:57.613441 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:14:57.625827 systemd[1]: Stopped target paths.target - Path Units. Dec 13 13:14:57.637306 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 13:14:57.641159 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:14:57.649514 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 13:14:57.661092 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 13:14:57.672649 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 13:14:57.672703 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:14:57.683286 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 13:14:57.683327 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:14:57.694019 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 13:14:57.694074 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 13:14:57.705846 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 13:14:57.705900 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 13:14:57.717373 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 13:14:57.723542 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 13:14:57.735770 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 13:14:57.736418 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 13:14:57.736516 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 13:14:57.974604 kernel: hv_netvsc 000d3af7-acec-000d-3af7-acec000d3af7 eth0: Data path switched from VF: enP52625s1 Dec 13 13:14:57.747789 systemd-networkd[873]: eth0: DHCPv6 lease lost Dec 13 13:14:57.750908 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 13:14:57.751283 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 13:14:57.765671 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 13:14:57.765822 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 13:14:57.779245 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 13:14:57.779302 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:14:57.809352 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 13:14:57.823670 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 13:14:57.823743 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:14:57.835855 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 13:14:57.835908 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:14:57.849644 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 13:14:57.849694 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 13:14:57.860667 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Dec 13 13:14:57.860717 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:14:57.872963 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:14:57.913810 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 13:14:57.914231 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:14:57.926335 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 13:14:57.926387 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 13:14:57.937655 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 13:14:57.937690 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:14:57.957266 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 13:14:57.957325 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:14:57.974676 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 13:14:57.974740 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 13:14:57.984692 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:14:57.984757 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:14:58.025388 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 13:14:58.038424 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 13:14:58.038494 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:14:58.051233 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 13:14:58.051297 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:14:58.070749 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 13:14:58.070816 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:14:58.082534 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:14:58.082585 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:14:58.094590 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 13:14:58.094699 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 13:14:58.106323 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 13:14:58.313799 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Dec 13 13:14:58.106420 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 13:14:58.117108 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 13:14:58.117209 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 13:14:58.130981 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 13:14:58.143272 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 13:14:58.143369 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 13:14:58.172387 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 13:14:58.191547 systemd[1]: Switching root. 
Dec 13 13:14:58.359042 systemd-journald[218]: Journal stopped Dec 13 13:15:03.911304 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 13:15:03.911327 kernel: SELinux: policy capability open_perms=1 Dec 13 13:15:03.911337 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 13:15:03.911345 kernel: SELinux: policy capability always_check_network=0 Dec 13 13:15:03.911354 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 13:15:03.911362 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 13:15:03.911370 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 13:15:03.911378 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 13:15:03.911387 systemd[1]: Successfully loaded SELinux policy in 135.539ms. Dec 13 13:15:03.911396 kernel: audit: type=1403 audit(1734095699.655:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 13:15:03.911406 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.916ms. Dec 13 13:15:03.911416 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 13:15:03.911424 systemd[1]: Detected virtualization microsoft. Dec 13 13:15:03.911434 systemd[1]: Detected architecture arm64. Dec 13 13:15:03.911443 systemd[1]: Detected first boot. Dec 13 13:15:03.911454 systemd[1]: Hostname set to . Dec 13 13:15:03.911463 systemd[1]: Initializing machine ID from random generator. Dec 13 13:15:03.911471 zram_generator::config[1156]: No configuration found. Dec 13 13:15:03.911481 systemd[1]: Populated /etc with preset unit settings. Dec 13 13:15:03.911489 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 13:15:03.911498 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 13:15:03.911506 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 13:15:03.911517 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 13:15:03.911526 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 13:15:03.911535 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 13:15:03.911544 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 13:15:03.911552 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 13:15:03.911561 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 13:15:03.911570 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 13:15:03.911580 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 13:15:03.911594 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:15:03.911603 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:15:03.911612 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 13:15:03.911621 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
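"Initializing machine ID from random generator" above corresponds to systemd creating /etc/machine-id on first boot. A rough sketch of producing an ID in the same 32-hex-character format; treating it as plain random bytes is a simplification (systemd also sets UUID version/variant bits), and the persisted path is shown only as a comment.

    # Sketch: generate a 128-bit random machine ID in the 32-hex-digit format
    # used by /etc/machine-id. Simplified relative to what systemd actually does.
    import os

    machine_id = os.urandom(16).hex()  # 32 lowercase hex characters
    print(machine_id)
    # To persist it (illustrative only):
    # with open("/etc/machine-id", "w") as f:
    #     f.write(machine_id + "\n")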
Dec 13 13:15:03.911631 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 13:15:03.911640 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:15:03.911648 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 13 13:15:03.911659 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:15:03.911668 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 13:15:03.911677 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 13:15:03.911688 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 13:15:03.911697 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 13:15:03.911706 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:15:03.911715 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:15:03.911724 systemd[1]: Reached target slices.target - Slice Units. Dec 13 13:15:03.911734 systemd[1]: Reached target swap.target - Swaps. Dec 13 13:15:03.911743 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 13:15:03.911752 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 13:15:03.911761 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:15:03.911769 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 13:15:03.911779 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:15:03.911789 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 13:15:03.911799 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 13:15:03.911808 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 13:15:03.911817 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 13:15:03.911827 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 13:15:03.911836 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 13:15:03.911845 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 13:15:03.911856 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 13:15:03.911865 systemd[1]: Reached target machines.target - Containers. Dec 13 13:15:03.911874 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 13:15:03.911883 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:15:03.911893 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:15:03.911902 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 13:15:03.911911 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:15:03.911920 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 13:15:03.911930 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:15:03.911940 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Dec 13 13:15:03.911949 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:15:03.911958 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 13:15:03.911967 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 13:15:03.911976 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 13:15:03.911985 kernel: fuse: init (API version 7.39) Dec 13 13:15:03.911994 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 13:15:03.912004 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 13:15:03.912013 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:15:03.912023 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:15:03.912032 kernel: loop: module loaded Dec 13 13:15:03.912040 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 13:15:03.912064 systemd-journald[1259]: Collecting audit messages is disabled. Dec 13 13:15:03.912085 systemd-journald[1259]: Journal started Dec 13 13:15:03.912109 systemd-journald[1259]: Runtime Journal (/run/log/journal/65b937df2a914d83b4a6057e38d22b6a) is 8.0M, max 78.5M, 70.5M free. Dec 13 13:15:02.837064 systemd[1]: Queued start job for default target multi-user.target. Dec 13 13:15:03.002786 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 13 13:15:03.003178 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 13:15:03.004397 systemd[1]: systemd-journald.service: Consumed 3.367s CPU time. Dec 13 13:15:03.926955 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 13:15:03.938155 kernel: ACPI: bus type drm_connector registered Dec 13 13:15:03.954381 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:15:03.966900 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 13:15:03.966961 systemd[1]: Stopped verity-setup.service. Dec 13 13:15:03.986148 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:15:03.986947 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 13:15:03.992981 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 13:15:03.999443 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 13:15:04.005173 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 13:15:04.011506 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 13:15:04.017788 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 13:15:04.023376 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 13:15:04.030251 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:15:04.037716 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 13:15:04.037854 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 13:15:04.045001 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:15:04.045150 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:15:04.051840 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 13:15:04.051976 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Dec 13 13:15:04.059221 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:15:04.059363 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:15:04.066422 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 13:15:04.066554 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 13:15:04.073261 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:15:04.073397 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:15:04.079768 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 13:15:04.086731 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 13:15:04.094269 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 13:15:04.101546 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:15:04.119824 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 13:15:04.131230 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 13:15:04.138738 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 13:15:04.145720 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 13:15:04.145759 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:15:04.153732 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 13:15:04.162814 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 13:15:04.173341 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 13:15:04.181477 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:15:04.185421 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 13:15:04.192796 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 13:15:04.199445 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 13:15:04.201470 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 13:15:04.208104 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:15:04.210462 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:15:04.224336 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 13:15:04.236079 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 13:15:04.244912 systemd-journald[1259]: Time spent on flushing to /var/log/journal/65b937df2a914d83b4a6057e38d22b6a is 12.390ms for 905 entries. Dec 13 13:15:04.244912 systemd-journald[1259]: System Journal (/var/log/journal/65b937df2a914d83b4a6057e38d22b6a) is 8.0M, max 2.6G, 2.6G free. Dec 13 13:15:04.282043 systemd-journald[1259]: Received client request to flush runtime journal. Dec 13 13:15:04.246338 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Dec 13 13:15:04.267906 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 13:15:04.276247 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 13:15:04.284477 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 13:15:04.291706 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 13:15:04.305551 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 13:15:04.308146 kernel: loop0: detected capacity change from 0 to 116784 Dec 13 13:15:04.317796 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 13:15:04.332843 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 13:15:04.340156 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:15:04.347937 udevadm[1293]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 13:15:04.378689 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 13:15:04.379882 systemd-tmpfiles[1292]: ACLs are not supported, ignoring. Dec 13 13:15:04.380216 systemd-tmpfiles[1292]: ACLs are not supported, ignoring. Dec 13 13:15:04.380819 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 13:15:04.388692 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:15:04.402366 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 13:15:04.542790 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 13:15:04.556332 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 13:15:04.577544 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. Dec 13 13:15:04.577562 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. Dec 13 13:15:04.581685 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:15:04.824217 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 13:15:04.882149 kernel: loop1: detected capacity change from 0 to 28752 Dec 13 13:15:05.244179 kernel: loop2: detected capacity change from 0 to 194096 Dec 13 13:15:05.292633 kernel: loop3: detected capacity change from 0 to 113552 Dec 13 13:15:05.311567 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 13:15:05.323395 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:15:05.350124 systemd-udevd[1318]: Using default interface naming scheme 'v255'. Dec 13 13:15:05.664159 kernel: loop4: detected capacity change from 0 to 116784 Dec 13 13:15:05.675142 kernel: loop5: detected capacity change from 0 to 28752 Dec 13 13:15:05.684163 kernel: loop6: detected capacity change from 0 to 194096 Dec 13 13:15:05.694143 kernel: loop7: detected capacity change from 0 to 113552 Dec 13 13:15:05.697409 (sd-merge)[1320]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Dec 13 13:15:05.697828 (sd-merge)[1320]: Merged extensions into '/usr'. Dec 13 13:15:05.701406 systemd[1]: Reloading requested from client PID 1291 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 13:15:05.701524 systemd[1]: Reloading... 
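The (sd-merge) lines above show systemd-sysext merging the containerd-flatcar, docker-flatcar, kubernetes and oem-azure extension images into /usr. In effect this is an overlay of the extension trees on top of /usr; the sketch below only assembles an overlayfs lowerdir option string for illustration, with the per-extension mount points assumed, and performs no mount itself.

    # Sketch: build an overlayfs "lowerdir" option of the kind systemd-sysext
    # effectively uses when merging extensions over /usr. Mount points are
    # assumed; nothing is mounted here.
    extensions = ["containerd-flatcar", "docker-flatcar", "kubernetes", "oem-azure"]
    ext_roots = [f"/run/extensions/{name}/usr" for name in extensions]  # assumed layout

    # In overlayfs the first lowerdir is the top-most layer, so list the
    # extensions ahead of /usr to let them take precedence.
    lowerdir = ":".join(ext_roots + ["/usr"])
    print(f"mount -t overlay overlay -o ro,lowerdir={lowerdir} /usr")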
Dec 13 13:15:05.762209 zram_generator::config[1346]: No configuration found. Dec 13 13:15:05.887247 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1416) Dec 13 13:15:05.906259 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1416) Dec 13 13:15:05.924116 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:15:05.995751 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 13:15:05.996524 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 13 13:15:05.996809 systemd[1]: Reloading finished in 294 ms. Dec 13 13:15:06.037195 kernel: hv_vmbus: registering driver hv_balloon Dec 13 13:15:06.037291 kernel: hv_vmbus: registering driver hyperv_fb Dec 13 13:15:06.037320 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 13 13:15:06.050302 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 13 13:15:06.051786 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:15:06.065170 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 13 13:15:06.065299 kernel: hv_balloon: Memory hot add disabled on ARM64 Dec 13 13:15:06.077732 kernel: Console: switching to colour dummy device 80x25 Dec 13 13:15:06.078499 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 13:15:06.090828 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 13:15:06.115353 systemd[1]: Starting ensure-sysext.service... Dec 13 13:15:06.129302 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:15:06.150172 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1418) Dec 13 13:15:06.153674 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:15:06.168643 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:15:06.205701 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 13:15:06.215322 systemd[1]: Reloading requested from client PID 1445 ('systemctl') (unit ensure-sysext.service)... Dec 13 13:15:06.215334 systemd[1]: Reloading... Dec 13 13:15:06.255796 systemd-tmpfiles[1458]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 13:15:06.256011 systemd-tmpfiles[1458]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 13:15:06.258883 systemd-tmpfiles[1458]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 13:15:06.259093 systemd-tmpfiles[1458]: ACLs are not supported, ignoring. Dec 13 13:15:06.260221 systemd-tmpfiles[1458]: ACLs are not supported, ignoring. Dec 13 13:15:06.292073 systemd-tmpfiles[1458]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 13:15:06.292241 systemd-tmpfiles[1458]: Skipping /boot Dec 13 13:15:06.302155 systemd-tmpfiles[1458]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 13:15:06.302857 systemd-tmpfiles[1458]: Skipping /boot Dec 13 13:15:06.322618 zram_generator::config[1536]: No configuration found. 
Dec 13 13:15:06.443552 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:15:06.451751 systemd-networkd[1451]: lo: Link UP Dec 13 13:15:06.451762 systemd-networkd[1451]: lo: Gained carrier Dec 13 13:15:06.454237 systemd-networkd[1451]: Enumeration completed Dec 13 13:15:06.454682 systemd-networkd[1451]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:15:06.454763 systemd-networkd[1451]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:15:06.501148 kernel: mlx5_core cd91:00:02.0 enP52625s1: Link up Dec 13 13:15:06.526407 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 13 13:15:06.528148 kernel: hv_netvsc 000d3af7-acec-000d-3af7-acec000d3af7 eth0: Data path switched to VF: enP52625s1 Dec 13 13:15:06.534322 systemd[1]: Reloading finished in 318 ms. Dec 13 13:15:06.535477 systemd-networkd[1451]: enP52625s1: Link UP Dec 13 13:15:06.535614 systemd-networkd[1451]: eth0: Link UP Dec 13 13:15:06.535617 systemd-networkd[1451]: eth0: Gained carrier Dec 13 13:15:06.535638 systemd-networkd[1451]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:15:06.540434 systemd-networkd[1451]: enP52625s1: Gained carrier Dec 13 13:15:06.548870 systemd-networkd[1451]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 13:15:06.551584 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 13:15:06.559395 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:15:06.577771 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:15:06.601735 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 13:15:06.627441 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:15:06.653629 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 13:15:06.660796 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:15:06.662106 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 13:15:06.671428 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:15:06.679406 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:15:06.687494 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:15:06.693716 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:15:06.702270 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 13:15:06.715253 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 13:15:06.732440 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 13:15:06.743949 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:15:06.753046 lvm[1610]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Dec 13 13:15:06.762505 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 13:15:06.769692 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:15:06.769861 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:15:06.777320 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:15:06.784866 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:15:06.797302 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 13:15:06.807366 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:15:06.807506 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:15:06.814990 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:15:06.815165 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:15:06.824068 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:15:06.824238 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:15:06.833005 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 13:15:06.841387 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:15:06.869279 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:15:06.878787 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:15:06.884480 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 13:15:06.894541 augenrules[1646]: No rules Dec 13 13:15:06.895091 lvm[1644]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 13:15:06.905585 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:15:06.915431 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 13:15:06.926434 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:15:06.943978 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:15:06.949750 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:15:06.949936 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 13:15:06.956798 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:15:06.958239 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:15:06.964423 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 13:15:06.972989 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 13:15:06.980621 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 13:15:06.987978 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 13:15:06.995540 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:15:06.995678 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:15:07.002484 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 13:15:07.002602 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Dec 13 13:15:07.009117 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:15:07.009255 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:15:07.016188 systemd-resolved[1624]: Positive Trust Anchors: Dec 13 13:15:07.016202 systemd-resolved[1624]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:15:07.016233 systemd-resolved[1624]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:15:07.016546 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:15:07.016655 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:15:07.024914 systemd[1]: Finished ensure-sysext.service. Dec 13 13:15:07.035872 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 13:15:07.035954 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:15:07.035983 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 13:15:07.057215 systemd-resolved[1624]: Using system hostname 'ci-4186.0.0-a-128d80e197'. Dec 13 13:15:07.058773 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:15:07.065213 systemd[1]: Reached target network.target - Network. Dec 13 13:15:07.070231 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:15:08.327259 systemd-networkd[1451]: enP52625s1: Gained IPv6LL Dec 13 13:15:08.455289 systemd-networkd[1451]: eth0: Gained IPv6LL Dec 13 13:15:08.458188 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 13:15:08.466499 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 13:15:10.032835 ldconfig[1285]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 13:15:10.043636 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 13:15:10.055288 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 13:15:10.069588 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 13:15:10.076114 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:15:10.082400 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 13:15:10.089607 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 13:15:10.101086 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 13:15:10.107291 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
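The ldconfig warning above ("/usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start") is harmless here, but the check it describes is easy to reproduce: an ELF file begins with the four bytes 0x7f 'E' 'L' 'F'. A small sketch:

    # Sketch: the magic-byte test behind messages like the ldconfig warning above.
    ELF_MAGIC = b"\x7fELF"

    def is_elf(path):
        with open(path, "rb") as f:
            return f.read(4) == ELF_MAGIC

    print(is_elf("/usr/lib/ld.so.conf"))  # False: it is a text configuration file
    print(is_elf("/usr/bin/python3"))     # True on a typical Linux install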
Dec 13 13:15:10.114627 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 13:15:10.122154 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 13:15:10.122195 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:15:10.127065 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:15:10.133059 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 13:15:10.140504 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 13:15:10.149756 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 13:15:10.156350 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 13:15:10.162306 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:15:10.167396 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:15:10.172585 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:15:10.172614 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:15:10.183221 systemd[1]: Starting chronyd.service - NTP client/server... Dec 13 13:15:10.193282 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 13:15:10.204316 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 13:15:10.212822 (chronyd)[1671]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Dec 13 13:15:10.214044 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 13:15:10.220367 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 13:15:10.229679 jq[1678]: false Dec 13 13:15:10.230368 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 13:15:10.236146 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 13:15:10.236194 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Dec 13 13:15:10.237947 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Dec 13 13:15:10.248390 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Dec 13 13:15:10.250439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:15:10.254504 KVP[1680]: KVP starting; pid is:1680 Dec 13 13:15:10.257406 chronyd[1684]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Dec 13 13:15:10.263895 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 13:15:10.272280 kernel: hv_utils: KVP IC version 4.0 Dec 13 13:15:10.272113 KVP[1680]: KVP LIC Version: 3.1 Dec 13 13:15:10.275488 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 13:15:10.289202 chronyd[1684]: Timezone right/UTC failed leap second check, ignoring Dec 13 13:15:10.289421 chronyd[1684]: Loaded seccomp filter (level 2) Dec 13 13:15:10.290348 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Dec 13 13:15:10.299430 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 13:15:10.308333 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 13:15:10.319709 extend-filesystems[1679]: Found loop4 Dec 13 13:15:10.325631 extend-filesystems[1679]: Found loop5 Dec 13 13:15:10.325631 extend-filesystems[1679]: Found loop6 Dec 13 13:15:10.325631 extend-filesystems[1679]: Found loop7 Dec 13 13:15:10.325631 extend-filesystems[1679]: Found sda Dec 13 13:15:10.325631 extend-filesystems[1679]: Found sda1 Dec 13 13:15:10.325631 extend-filesystems[1679]: Found sda2 Dec 13 13:15:10.325631 extend-filesystems[1679]: Found sda3 Dec 13 13:15:10.325631 extend-filesystems[1679]: Found usr Dec 13 13:15:10.325631 extend-filesystems[1679]: Found sda4 Dec 13 13:15:10.325631 extend-filesystems[1679]: Found sda6 Dec 13 13:15:10.325631 extend-filesystems[1679]: Found sda7 Dec 13 13:15:10.325631 extend-filesystems[1679]: Found sda9 Dec 13 13:15:10.325631 extend-filesystems[1679]: Checking size of /dev/sda9 Dec 13 13:15:10.325440 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 13:15:10.415379 dbus-daemon[1674]: [system] SELinux support is enabled Dec 13 13:15:10.500851 extend-filesystems[1679]: Old size kept for /dev/sda9 Dec 13 13:15:10.500851 extend-filesystems[1679]: Found sr0 Dec 13 13:15:10.335721 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 13:15:10.543723 coreos-metadata[1673]: Dec 13 13:15:10.510 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 13:15:10.543723 coreos-metadata[1673]: Dec 13 13:15:10.520 INFO Fetch successful Dec 13 13:15:10.543723 coreos-metadata[1673]: Dec 13 13:15:10.520 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 13 13:15:10.543723 coreos-metadata[1673]: Dec 13 13:15:10.525 INFO Fetch successful Dec 13 13:15:10.543723 coreos-metadata[1673]: Dec 13 13:15:10.526 INFO Fetching http://168.63.129.16/machine/d7062142-9702-499e-bff6-39b9b14ac147/9392783b%2Dacfc%2D4040%2Da89c%2D8727f99a509e.%5Fci%2D4186.0.0%2Da%2D128d80e197?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 13 13:15:10.543723 coreos-metadata[1673]: Dec 13 13:15:10.535 INFO Fetch successful Dec 13 13:15:10.543723 coreos-metadata[1673]: Dec 13 13:15:10.535 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 13 13:15:10.337509 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 13:15:10.345457 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 13:15:10.375470 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 13:15:10.550720 update_engine[1701]: I20241213 13:15:10.434531 1701 main.cc:92] Flatcar Update Engine starting Dec 13 13:15:10.550720 update_engine[1701]: I20241213 13:15:10.435791 1701 update_check_scheduler.cc:74] Next update check in 8m39s Dec 13 13:15:10.388329 systemd[1]: Started chronyd.service - NTP client/server. Dec 13 13:15:10.550982 jq[1708]: true Dec 13 13:15:10.413449 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 13:15:10.413656 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Dec 13 13:15:10.413915 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 13:15:10.414048 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 13:15:10.430662 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 13:15:10.455559 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 13:15:10.455780 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 13:15:10.475096 systemd-logind[1699]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Dec 13 13:15:10.475432 systemd-logind[1699]: New seat seat0. Dec 13 13:15:10.481344 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 13:15:10.516174 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 13:15:10.549566 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 13:15:10.551169 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 13:15:10.563499 coreos-metadata[1673]: Dec 13 13:15:10.559 INFO Fetch successful Dec 13 13:15:10.587227 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1729) Dec 13 13:15:10.590009 (ntainerd)[1734]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 13:15:10.600200 jq[1733]: true Dec 13 13:15:10.620338 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 13:15:10.620482 dbus-daemon[1674]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 13:15:10.620378 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 13:15:10.621240 tar[1730]: linux-arm64/helm Dec 13 13:15:10.635581 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 13:15:10.635608 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 13:15:10.651168 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 13:15:10.669779 systemd[1]: Started update-engine.service - Update Engine. Dec 13 13:15:10.692671 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 13:15:10.699492 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 13:15:10.857672 bash[1806]: Updated "/home/core/.ssh/authorized_keys" Dec 13 13:15:10.859070 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 13:15:10.874213 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 13:15:11.004660 locksmithd[1790]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 13:15:11.220764 tar[1730]: linux-arm64/LICENSE Dec 13 13:15:11.220764 tar[1730]: linux-arm64/README.md Dec 13 13:15:11.227427 containerd[1734]: time="2024-12-13T13:15:11.227329900Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Dec 13 13:15:11.236712 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
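The coreos-metadata fetches logged above hit both the Azure wireserver (168.63.129.16) and the instance metadata service URL `http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text`. A minimal sketch of re-issuing that same IMDS query from inside the VM follows; the `Metadata: true` request header is an assumption based on Azure IMDS conventions and is not visible in the log itself.

```python
# Illustrative sketch only: re-issues the IMDS vmSize query shown in the log.
# The "Metadata: true" header is an assumption (standard Azure IMDS usage);
# the log does not show request headers. Run from inside the VM.
import urllib.request

IMDS_URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
            "?api-version=2017-08-01&format=text")

def fetch_vm_size(timeout: float = 5.0) -> str:
    req = urllib.request.Request(IMDS_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    # Prints the SKU string for this VM; the actual value depends on the host.
    print(fetch_vm_size())
```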
Dec 13 13:15:11.278186 containerd[1734]: time="2024-12-13T13:15:11.278100740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:15:11.281279 containerd[1734]: time="2024-12-13T13:15:11.281233980Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:15:11.281322 containerd[1734]: time="2024-12-13T13:15:11.281278140Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 13:15:11.281322 containerd[1734]: time="2024-12-13T13:15:11.281298580Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 13:15:11.281483 containerd[1734]: time="2024-12-13T13:15:11.281460620Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 13:15:11.281511 containerd[1734]: time="2024-12-13T13:15:11.281484740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 13:15:11.282396 containerd[1734]: time="2024-12-13T13:15:11.281546940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:15:11.282396 containerd[1734]: time="2024-12-13T13:15:11.281567420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:15:11.282396 containerd[1734]: time="2024-12-13T13:15:11.281733300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:15:11.282396 containerd[1734]: time="2024-12-13T13:15:11.281749180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 13:15:11.282396 containerd[1734]: time="2024-12-13T13:15:11.281761820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:15:11.282396 containerd[1734]: time="2024-12-13T13:15:11.281770700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 13:15:11.282396 containerd[1734]: time="2024-12-13T13:15:11.281838820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:15:11.282396 containerd[1734]: time="2024-12-13T13:15:11.282020260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:15:11.282396 containerd[1734]: time="2024-12-13T13:15:11.282109540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:15:11.282396 containerd[1734]: time="2024-12-13T13:15:11.282122700Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Dec 13 13:15:11.282396 containerd[1734]: time="2024-12-13T13:15:11.282224820Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 13:15:11.282596 containerd[1734]: time="2024-12-13T13:15:11.282264540Z" level=info msg="metadata content store policy set" policy=shared Dec 13 13:15:11.294710 containerd[1734]: time="2024-12-13T13:15:11.294664660Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 13:15:11.294847 containerd[1734]: time="2024-12-13T13:15:11.294820940Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 13:15:11.294873 containerd[1734]: time="2024-12-13T13:15:11.294851020Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 13:15:11.294890 containerd[1734]: time="2024-12-13T13:15:11.294870500Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 13:15:11.294890 containerd[1734]: time="2024-12-13T13:15:11.294885620Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 13:15:11.295089 containerd[1734]: time="2024-12-13T13:15:11.295064780Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 13:15:11.295351 containerd[1734]: time="2024-12-13T13:15:11.295332260Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 13:15:11.295462 containerd[1734]: time="2024-12-13T13:15:11.295441420Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 13:15:11.295491 containerd[1734]: time="2024-12-13T13:15:11.295462380Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 13:15:11.295491 containerd[1734]: time="2024-12-13T13:15:11.295478140Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 13:15:11.295533 containerd[1734]: time="2024-12-13T13:15:11.295491620Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 13:15:11.295533 containerd[1734]: time="2024-12-13T13:15:11.295504900Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 13:15:11.295533 containerd[1734]: time="2024-12-13T13:15:11.295517060Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 13:15:11.295533 containerd[1734]: time="2024-12-13T13:15:11.295530580Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 13:15:11.295627 containerd[1734]: time="2024-12-13T13:15:11.295550220Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 13:15:11.295627 containerd[1734]: time="2024-12-13T13:15:11.295564740Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 13:15:11.295627 containerd[1734]: time="2024-12-13T13:15:11.295575500Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Dec 13 13:15:11.295627 containerd[1734]: time="2024-12-13T13:15:11.295587740Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 13:15:11.295627 containerd[1734]: time="2024-12-13T13:15:11.295606820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 13:15:11.295627 containerd[1734]: time="2024-12-13T13:15:11.295619980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 13:15:11.295725 containerd[1734]: time="2024-12-13T13:15:11.295632260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 13:15:11.295725 containerd[1734]: time="2024-12-13T13:15:11.295644060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 13:15:11.295725 containerd[1734]: time="2024-12-13T13:15:11.295655740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 13:15:11.295725 containerd[1734]: time="2024-12-13T13:15:11.295667940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 13:15:11.295725 containerd[1734]: time="2024-12-13T13:15:11.295679500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 13:15:11.295725 containerd[1734]: time="2024-12-13T13:15:11.295690980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 13:15:11.295725 containerd[1734]: time="2024-12-13T13:15:11.295702540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 13:15:11.295725 containerd[1734]: time="2024-12-13T13:15:11.295716500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 13:15:11.295725 containerd[1734]: time="2024-12-13T13:15:11.295727900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 13:15:11.295868 containerd[1734]: time="2024-12-13T13:15:11.295739180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 13:15:11.295868 containerd[1734]: time="2024-12-13T13:15:11.295750940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 13:15:11.295868 containerd[1734]: time="2024-12-13T13:15:11.295764380Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 13:15:11.295868 containerd[1734]: time="2024-12-13T13:15:11.295784620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 13:15:11.295868 containerd[1734]: time="2024-12-13T13:15:11.295804140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 13:15:11.295868 containerd[1734]: time="2024-12-13T13:15:11.295816500Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 13:15:11.295868 containerd[1734]: time="2024-12-13T13:15:11.295865220Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Dec 13 13:15:11.295984 containerd[1734]: time="2024-12-13T13:15:11.295882460Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 13:15:11.295984 containerd[1734]: time="2024-12-13T13:15:11.295892740Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 13:15:11.295984 containerd[1734]: time="2024-12-13T13:15:11.295903500Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 13:15:11.295984 containerd[1734]: time="2024-12-13T13:15:11.295912580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 13:15:11.295984 containerd[1734]: time="2024-12-13T13:15:11.295929660Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 13:15:11.295984 containerd[1734]: time="2024-12-13T13:15:11.295939220Z" level=info msg="NRI interface is disabled by configuration." Dec 13 13:15:11.295984 containerd[1734]: time="2024-12-13T13:15:11.295948780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 13:15:11.296883 containerd[1734]: time="2024-12-13T13:15:11.296237620Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 13:15:11.296883 containerd[1734]: time="2024-12-13T13:15:11.296291860Z" level=info msg="Connect containerd service" Dec 13 13:15:11.296883 containerd[1734]: time="2024-12-13T13:15:11.296327300Z" level=info msg="using legacy CRI server" Dec 13 13:15:11.296883 containerd[1734]: time="2024-12-13T13:15:11.296333860Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 13:15:11.296883 containerd[1734]: time="2024-12-13T13:15:11.296441900Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 13:15:11.297110 containerd[1734]: time="2024-12-13T13:15:11.297029340Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:15:11.299135 containerd[1734]: time="2024-12-13T13:15:11.297672100Z" level=info msg="Start subscribing containerd event" Dec 13 13:15:11.299135 containerd[1734]: time="2024-12-13T13:15:11.297721860Z" level=info msg="Start recovering state" Dec 13 13:15:11.299135 containerd[1734]: time="2024-12-13T13:15:11.298254500Z" level=info msg="Start event monitor" Dec 13 13:15:11.299135 containerd[1734]: time="2024-12-13T13:15:11.298276780Z" level=info msg="Start snapshots syncer" Dec 13 13:15:11.299135 containerd[1734]: time="2024-12-13T13:15:11.298285900Z" level=info msg="Start cni network conf syncer for default" Dec 13 13:15:11.299135 containerd[1734]: time="2024-12-13T13:15:11.298293020Z" level=info msg="Start streaming server" Dec 13 13:15:11.299589 containerd[1734]: time="2024-12-13T13:15:11.299544940Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 13:15:11.299635 containerd[1734]: time="2024-12-13T13:15:11.299617580Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 13:15:11.307303 containerd[1734]: time="2024-12-13T13:15:11.304860860Z" level=info msg="containerd successfully booted in 0.082305s" Dec 13 13:15:11.304918 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 13:15:11.435994 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:15:11.447799 (kubelet)[1832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:15:11.476575 sshd_keygen[1706]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 13:15:11.495204 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 13:15:11.506809 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 13:15:11.514545 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Dec 13 13:15:11.527080 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 13:15:11.527912 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 13:15:11.545258 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
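During the containerd startup logged above, the CRI plugin reports "no network config found in /etc/cni/net.d: cni plugin not initialized"; its conf syncer then watches that directory. A sketch of dropping a minimal CNI config there is shown below, under stated assumptions: the network name, bridge name and subnet are illustrative, and it presumes the standard `bridge` and `host-local` plugins exist under /opt/cni/bin (the NetworkPluginBinDir in the dumped config), which the log does not confirm.

```python
# Sketch only: writes an assumed-minimal CNI bridge config so containerd's
# cni network conf syncer (started above) has something to load from
# /etc/cni/net.d. All names and the subnet are placeholders, not host values.
import json
import pathlib

conf = {
    "cniVersion": "0.4.0",
    "name": "example-net",          # illustrative network name
    "type": "bridge",               # assumes the standard bridge plugin
    "bridge": "cni0",
    "isGateway": True,
    "ipMasq": True,
    "ipam": {"type": "host-local", "subnet": "10.88.0.0/16"},
}

path = pathlib.Path("/etc/cni/net.d/10-example.conf")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(conf, indent=2))
print(f"wrote {path}")
```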
Dec 13 13:15:11.561343 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Dec 13 13:15:11.570299 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 13:15:11.583573 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 13:15:11.598577 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 13 13:15:11.605999 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 13:15:11.613433 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 13:15:11.621773 systemd[1]: Startup finished in 676ms (kernel) + 12.436s (initrd) + 12.100s (userspace) = 25.213s. Dec 13 13:15:11.649472 agetty[1859]: failed to open credentials directory Dec 13 13:15:11.649991 agetty[1856]: failed to open credentials directory Dec 13 13:15:11.948832 login[1856]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Dec 13 13:15:11.950540 login[1859]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:15:11.959410 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 13:15:11.967432 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 13:15:11.969966 systemd-logind[1699]: New session 1 of user core. Dec 13 13:15:11.982815 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 13:15:11.988678 kubelet[1832]: E1213 13:15:11.988625 1832 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:15:11.990728 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 13:15:11.990961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:15:11.991088 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:15:11.995646 (systemd)[1869]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 13:15:12.208087 systemd[1869]: Queued start job for default target default.target. Dec 13 13:15:12.213063 systemd[1869]: Created slice app.slice - User Application Slice. Dec 13 13:15:12.213143 systemd[1869]: Reached target paths.target - Paths. Dec 13 13:15:12.213168 systemd[1869]: Reached target timers.target - Timers. Dec 13 13:15:12.214434 systemd[1869]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 13:15:12.225329 systemd[1869]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 13:15:12.225443 systemd[1869]: Reached target sockets.target - Sockets. Dec 13 13:15:12.225455 systemd[1869]: Reached target basic.target - Basic System. Dec 13 13:15:12.225491 systemd[1869]: Reached target default.target - Main User Target. Dec 13 13:15:12.225516 systemd[1869]: Startup finished in 222ms. Dec 13 13:15:12.225680 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 13:15:12.231333 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 13:15:12.949398 login[1856]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:15:12.953637 systemd-logind[1699]: New session 2 of user core. Dec 13 13:15:12.958287 systemd[1]: Started session-2.scope - Session 2 of User core. 
Dec 13 13:15:13.294982 waagent[1852]: 2024-12-13T13:15:13.294839Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Dec 13 13:15:13.300962 waagent[1852]: 2024-12-13T13:15:13.300886Z INFO Daemon Daemon OS: flatcar 4186.0.0 Dec 13 13:15:13.306231 waagent[1852]: 2024-12-13T13:15:13.306172Z INFO Daemon Daemon Python: 3.11.10 Dec 13 13:15:13.311455 waagent[1852]: 2024-12-13T13:15:13.311245Z INFO Daemon Daemon Run daemon Dec 13 13:15:13.315743 waagent[1852]: 2024-12-13T13:15:13.315676Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4186.0.0' Dec 13 13:15:13.325390 waagent[1852]: 2024-12-13T13:15:13.325329Z INFO Daemon Daemon Using waagent for provisioning Dec 13 13:15:13.330687 waagent[1852]: 2024-12-13T13:15:13.330642Z INFO Daemon Daemon Activate resource disk Dec 13 13:15:13.335457 waagent[1852]: 2024-12-13T13:15:13.335409Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 13 13:15:13.348010 waagent[1852]: 2024-12-13T13:15:13.347948Z INFO Daemon Daemon Found device: None Dec 13 13:15:13.352565 waagent[1852]: 2024-12-13T13:15:13.352517Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 13 13:15:13.360825 waagent[1852]: 2024-12-13T13:15:13.360774Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 13 13:15:13.372404 waagent[1852]: 2024-12-13T13:15:13.372351Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 13:15:13.378623 waagent[1852]: 2024-12-13T13:15:13.378573Z INFO Daemon Daemon Running default provisioning handler Dec 13 13:15:13.392377 waagent[1852]: 2024-12-13T13:15:13.391743Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Dec 13 13:15:13.407171 waagent[1852]: 2024-12-13T13:15:13.407086Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 13:15:13.417303 waagent[1852]: 2024-12-13T13:15:13.417241Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 13:15:13.422261 waagent[1852]: 2024-12-13T13:15:13.422210Z INFO Daemon Daemon Copying ovf-env.xml Dec 13 13:15:13.495994 waagent[1852]: 2024-12-13T13:15:13.495880Z INFO Daemon Daemon Successfully mounted dvd Dec 13 13:15:13.510281 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 13 13:15:13.512341 waagent[1852]: 2024-12-13T13:15:13.512258Z INFO Daemon Daemon Detect protocol endpoint Dec 13 13:15:13.517226 waagent[1852]: 2024-12-13T13:15:13.517161Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 13:15:13.522990 waagent[1852]: 2024-12-13T13:15:13.522917Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Dec 13 13:15:13.529971 waagent[1852]: 2024-12-13T13:15:13.529912Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 13 13:15:13.535382 waagent[1852]: 2024-12-13T13:15:13.535324Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 13 13:15:13.540565 waagent[1852]: 2024-12-13T13:15:13.540502Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 13 13:15:13.588522 waagent[1852]: 2024-12-13T13:15:13.588434Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 13 13:15:13.595197 waagent[1852]: 2024-12-13T13:15:13.595164Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 13 13:15:13.600369 waagent[1852]: 2024-12-13T13:15:13.600317Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 13 13:15:13.782776 waagent[1852]: 2024-12-13T13:15:13.782672Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 13 13:15:13.789299 waagent[1852]: 2024-12-13T13:15:13.789229Z INFO Daemon Daemon Forcing an update of the goal state. Dec 13 13:15:13.801174 waagent[1852]: 2024-12-13T13:15:13.801094Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 13 13:15:13.821468 waagent[1852]: 2024-12-13T13:15:13.821421Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Dec 13 13:15:13.827485 waagent[1852]: 2024-12-13T13:15:13.827434Z INFO Daemon Dec 13 13:15:13.830263 waagent[1852]: 2024-12-13T13:15:13.830218Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: cef313e0-b497-4d76-bea2-96825502703f eTag: 3783203114245227424 source: Fabric] Dec 13 13:15:13.841787 waagent[1852]: 2024-12-13T13:15:13.841707Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Dec 13 13:15:13.848878 waagent[1852]: 2024-12-13T13:15:13.848825Z INFO Daemon Dec 13 13:15:13.851927 waagent[1852]: 2024-12-13T13:15:13.851875Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 13 13:15:13.863454 waagent[1852]: 2024-12-13T13:15:13.863412Z INFO Daemon Daemon Downloading artifacts profile blob Dec 13 13:15:13.952750 waagent[1852]: 2024-12-13T13:15:13.952657Z INFO Daemon Downloaded certificate {'thumbprint': 'EE38BEDBA8E99CC9292312CDE963D9B4AFF20E99', 'hasPrivateKey': True} Dec 13 13:15:13.963281 waagent[1852]: 2024-12-13T13:15:13.963227Z INFO Daemon Downloaded certificate {'thumbprint': '6CC767F6718702FF7049536CA308DA7FCFE1E61D', 'hasPrivateKey': False} Dec 13 13:15:13.973340 waagent[1852]: 2024-12-13T13:15:13.973285Z INFO Daemon Fetch goal state completed Dec 13 13:15:13.984792 waagent[1852]: 2024-12-13T13:15:13.984725Z INFO Daemon Daemon Starting provisioning Dec 13 13:15:13.989853 waagent[1852]: 2024-12-13T13:15:13.989797Z INFO Daemon Daemon Handle ovf-env.xml. Dec 13 13:15:13.994531 waagent[1852]: 2024-12-13T13:15:13.994483Z INFO Daemon Daemon Set hostname [ci-4186.0.0-a-128d80e197] Dec 13 13:15:14.048007 waagent[1852]: 2024-12-13T13:15:14.047927Z INFO Daemon Daemon Publish hostname [ci-4186.0.0-a-128d80e197] Dec 13 13:15:14.054547 waagent[1852]: 2024-12-13T13:15:14.054477Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 13 13:15:14.060931 waagent[1852]: 2024-12-13T13:15:14.060870Z INFO Daemon Daemon Primary interface is [eth0] Dec 13 13:15:14.125776 systemd-networkd[1451]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:15:14.125784 systemd-networkd[1451]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 13:15:14.125811 systemd-networkd[1451]: eth0: DHCP lease lost Dec 13 13:15:14.127093 waagent[1852]: 2024-12-13T13:15:14.127012Z INFO Daemon Daemon Create user account if not exists Dec 13 13:15:14.132872 waagent[1852]: 2024-12-13T13:15:14.132808Z INFO Daemon Daemon User core already exists, skip useradd Dec 13 13:15:14.138951 waagent[1852]: 2024-12-13T13:15:14.138893Z INFO Daemon Daemon Configure sudoer Dec 13 13:15:14.144466 waagent[1852]: 2024-12-13T13:15:14.144400Z INFO Daemon Daemon Configure sshd Dec 13 13:15:14.148954 waagent[1852]: 2024-12-13T13:15:14.148895Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 13 13:15:14.149020 systemd-networkd[1451]: eth0: DHCPv6 lease lost Dec 13 13:15:14.162581 waagent[1852]: 2024-12-13T13:15:14.162500Z INFO Daemon Daemon Deploy ssh public key. Dec 13 13:15:14.179178 systemd-networkd[1451]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 13:15:15.299742 waagent[1852]: 2024-12-13T13:15:15.299687Z INFO Daemon Daemon Provisioning complete Dec 13 13:15:15.319411 waagent[1852]: 2024-12-13T13:15:15.319359Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 13 13:15:15.326134 waagent[1852]: 2024-12-13T13:15:15.326054Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Dec 13 13:15:15.336598 waagent[1852]: 2024-12-13T13:15:15.336533Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Dec 13 13:15:15.473175 waagent[1924]: 2024-12-13T13:15:15.472904Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Dec 13 13:15:15.473175 waagent[1924]: 2024-12-13T13:15:15.473054Z INFO ExtHandler ExtHandler OS: flatcar 4186.0.0 Dec 13 13:15:15.473175 waagent[1924]: 2024-12-13T13:15:15.473106Z INFO ExtHandler ExtHandler Python: 3.11.10 Dec 13 13:15:15.494365 waagent[1924]: 2024-12-13T13:15:15.494272Z INFO ExtHandler ExtHandler Distro: flatcar-4186.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 13:15:15.494568 waagent[1924]: 2024-12-13T13:15:15.494526Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 13:15:15.494637 waagent[1924]: 2024-12-13T13:15:15.494603Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 13:15:15.503268 waagent[1924]: 2024-12-13T13:15:15.503197Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 13 13:15:15.510935 waagent[1924]: 2024-12-13T13:15:15.510890Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Dec 13 13:15:15.511501 waagent[1924]: 2024-12-13T13:15:15.511453Z INFO ExtHandler Dec 13 13:15:15.511577 waagent[1924]: 2024-12-13T13:15:15.511546Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 5b326098-4864-4c55-a651-f985dc34594c eTag: 3783203114245227424 source: Fabric] Dec 13 13:15:15.511879 waagent[1924]: 2024-12-13T13:15:15.511837Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 13 13:15:15.512481 waagent[1924]: 2024-12-13T13:15:15.512431Z INFO ExtHandler Dec 13 13:15:15.512548 waagent[1924]: 2024-12-13T13:15:15.512517Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 13 13:15:15.517675 waagent[1924]: 2024-12-13T13:15:15.517637Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 13 13:15:15.606159 waagent[1924]: 2024-12-13T13:15:15.605408Z INFO ExtHandler Downloaded certificate {'thumbprint': 'EE38BEDBA8E99CC9292312CDE963D9B4AFF20E99', 'hasPrivateKey': True} Dec 13 13:15:15.606159 waagent[1924]: 2024-12-13T13:15:15.605904Z INFO ExtHandler Downloaded certificate {'thumbprint': '6CC767F6718702FF7049536CA308DA7FCFE1E61D', 'hasPrivateKey': False} Dec 13 13:15:15.606439 waagent[1924]: 2024-12-13T13:15:15.606384Z INFO ExtHandler Fetch goal state completed Dec 13 13:15:15.623839 waagent[1924]: 2024-12-13T13:15:15.623777Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1924 Dec 13 13:15:15.623998 waagent[1924]: 2024-12-13T13:15:15.623961Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 13 13:15:15.625675 waagent[1924]: 2024-12-13T13:15:15.625625Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4186.0.0', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 13:15:15.626074 waagent[1924]: 2024-12-13T13:15:15.626029Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 13:15:16.072704 waagent[1924]: 2024-12-13T13:15:16.072338Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 13:15:16.072704 waagent[1924]: 2024-12-13T13:15:16.072539Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 13:15:16.079495 waagent[1924]: 2024-12-13T13:15:16.079456Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 13 13:15:16.087156 systemd[1]: Reloading requested from client PID 1939 ('systemctl') (unit waagent.service)... Dec 13 13:15:16.087174 systemd[1]: Reloading... Dec 13 13:15:16.169175 zram_generator::config[1976]: No configuration found. Dec 13 13:15:16.275020 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:15:16.355440 systemd[1]: Reloading finished in 267 ms. Dec 13 13:15:16.376741 waagent[1924]: 2024-12-13T13:15:16.376383Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Dec 13 13:15:16.383744 systemd[1]: Reloading requested from client PID 2027 ('systemctl') (unit waagent.service)... Dec 13 13:15:16.383763 systemd[1]: Reloading... Dec 13 13:15:16.455149 zram_generator::config[2059]: No configuration found. Dec 13 13:15:16.564921 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:15:16.645096 systemd[1]: Reloading finished in 261 ms. 
Dec 13 13:15:16.668161 waagent[1924]: 2024-12-13T13:15:16.667544Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 13 13:15:16.668161 waagent[1924]: 2024-12-13T13:15:16.667725Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 13 13:15:17.154164 waagent[1924]: 2024-12-13T13:15:17.153332Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 13 13:15:17.154260 waagent[1924]: 2024-12-13T13:15:17.154123Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Dec 13 13:15:17.154982 waagent[1924]: 2024-12-13T13:15:17.154894Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 13 13:15:17.155492 waagent[1924]: 2024-12-13T13:15:17.155357Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 13 13:15:17.156155 waagent[1924]: 2024-12-13T13:15:17.155712Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 13:15:17.156155 waagent[1924]: 2024-12-13T13:15:17.155796Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 13:15:17.156155 waagent[1924]: 2024-12-13T13:15:17.155984Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 13 13:15:17.156263 waagent[1924]: 2024-12-13T13:15:17.156185Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 13:15:17.156263 waagent[1924]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 13:15:17.156263 waagent[1924]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 13:15:17.156263 waagent[1924]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 13:15:17.156263 waagent[1924]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 13:15:17.156263 waagent[1924]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 13:15:17.156263 waagent[1924]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 13:15:17.156838 waagent[1924]: 2024-12-13T13:15:17.156513Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 13:15:17.156838 waagent[1924]: 2024-12-13T13:15:17.156657Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 13:15:17.157121 waagent[1924]: 2024-12-13T13:15:17.157059Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 13:15:17.157271 waagent[1924]: 2024-12-13T13:15:17.157231Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 13:15:17.157397 waagent[1924]: 2024-12-13T13:15:17.157348Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 13:15:17.157526 waagent[1924]: 2024-12-13T13:15:17.157485Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
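The MonitorHandler dump above prints /proc/net/route verbatim, where destinations and gateways are hex-encoded little-endian IPv4 addresses (for example, gateway `0114C80A` decodes to 10.200.20.1, matching the DHCPv4 gateway reported earlier in this log). A small sketch that performs the same decoding:

```python
# Sketch: decode the hex-encoded, little-endian IPv4 fields that waagent dumps
# from /proc/net/route above (e.g. 0114C80A -> 10.200.20.1).
import socket
import struct

def hex_to_dotted(hex_addr: str) -> str:
    # /proc/net/route stores addresses as little-endian 32-bit hex strings.
    return socket.inet_ntoa(struct.pack("<I", int(hex_addr, 16)))

def routes(path: str = "/proc/net/route"):
    with open(path) as f:
        next(f)  # skip the "Iface Destination Gateway ..." header row
        for line in f:
            fields = line.split()
            yield fields[0], hex_to_dotted(fields[1]), hex_to_dotted(fields[2])

if __name__ == "__main__":
    for iface, dest, gw in routes():
        print(f"{iface:6} dest={dest:15} gw={gw}")
```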
Dec 13 13:15:17.157699 waagent[1924]: 2024-12-13T13:15:17.157646Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 13:15:17.158306 waagent[1924]: 2024-12-13T13:15:17.158227Z INFO EnvHandler ExtHandler Configure routes Dec 13 13:15:17.159408 waagent[1924]: 2024-12-13T13:15:17.159348Z INFO EnvHandler ExtHandler Gateway:None Dec 13 13:15:17.159520 waagent[1924]: 2024-12-13T13:15:17.159455Z INFO EnvHandler ExtHandler Routes:None Dec 13 13:15:17.167018 waagent[1924]: 2024-12-13T13:15:17.166969Z INFO ExtHandler ExtHandler Dec 13 13:15:17.167117 waagent[1924]: 2024-12-13T13:15:17.167076Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: c21b00aa-1ab4-4bea-be01-f4c7f044541c correlation 7715d17e-d38f-4f86-86a0-33d984e5bd4e created: 2024-12-13T13:13:59.306252Z] Dec 13 13:15:17.167539 waagent[1924]: 2024-12-13T13:15:17.167489Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 13 13:15:17.168096 waagent[1924]: 2024-12-13T13:15:17.168058Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Dec 13 13:15:17.199860 waagent[1924]: 2024-12-13T13:15:17.199739Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: DE1B842F-97BF-4798-956A-CBE32D503AB8;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Dec 13 13:15:17.267445 waagent[1924]: 2024-12-13T13:15:17.267359Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Dec 13 13:15:17.267445 waagent[1924]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 13:15:17.267445 waagent[1924]: pkts bytes target prot opt in out source destination Dec 13 13:15:17.267445 waagent[1924]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 13:15:17.267445 waagent[1924]: pkts bytes target prot opt in out source destination Dec 13 13:15:17.267445 waagent[1924]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 13:15:17.267445 waagent[1924]: pkts bytes target prot opt in out source destination Dec 13 13:15:17.267445 waagent[1924]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 13:15:17.267445 waagent[1924]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 13:15:17.267445 waagent[1924]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 13:15:17.270298 waagent[1924]: 2024-12-13T13:15:17.270238Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 13 13:15:17.270298 waagent[1924]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 13:15:17.270298 waagent[1924]: pkts bytes target prot opt in out source destination Dec 13 13:15:17.270298 waagent[1924]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 13:15:17.270298 waagent[1924]: pkts bytes target prot opt in out source destination Dec 13 13:15:17.270298 waagent[1924]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 13:15:17.270298 waagent[1924]: pkts bytes target prot opt in out source destination Dec 13 13:15:17.270298 waagent[1924]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 13:15:17.270298 waagent[1924]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 13:15:17.270298 waagent[1924]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 13:15:17.270533 waagent[1924]: 2024-12-13T13:15:17.270496Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 13 13:15:17.307663 
waagent[1924]: 2024-12-13T13:15:17.307586Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 13:15:17.307663 waagent[1924]: Executing ['ip', '-a', '-o', 'link']: Dec 13 13:15:17.307663 waagent[1924]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 13:15:17.307663 waagent[1924]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f7:ac:ec brd ff:ff:ff:ff:ff:ff Dec 13 13:15:17.307663 waagent[1924]: 3: enP52625s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f7:ac:ec brd ff:ff:ff:ff:ff:ff\ altname enP52625p0s2 Dec 13 13:15:17.307663 waagent[1924]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 13:15:17.307663 waagent[1924]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 13:15:17.307663 waagent[1924]: 2: eth0 inet 10.200.20.34/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 13:15:17.307663 waagent[1924]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 13:15:17.307663 waagent[1924]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 13 13:15:17.307663 waagent[1924]: 2: eth0 inet6 fe80::20d:3aff:fef7:acec/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 13 13:15:17.307663 waagent[1924]: 3: enP52625s1 inet6 fe80::20d:3aff:fef7:acec/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 13 13:15:22.098339 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 13:15:22.108328 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:15:22.202069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:15:22.210405 (kubelet)[2158]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:15:22.254215 kubelet[2158]: E1213 13:15:22.254160 2158 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:15:22.257209 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:15:22.257550 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:15:32.348506 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 13:15:32.360318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:15:32.444435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 13:15:32.448721 (kubelet)[2174]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:15:32.532952 kubelet[2174]: E1213 13:15:32.532870 2174 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:15:32.534915 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:15:32.535032 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:15:34.074782 chronyd[1684]: Selected source PHC0 Dec 13 13:15:42.598433 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 13:15:42.607420 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:15:42.701856 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:15:42.707009 (kubelet)[2190]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:15:42.745919 kubelet[2190]: E1213 13:15:42.745823 2190 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:15:42.748265 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:15:42.748407 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:15:52.848392 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 13:15:52.857342 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:15:52.962215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:15:52.966882 (kubelet)[2206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:15:53.019396 kubelet[2206]: E1213 13:15:53.019353 2206 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:15:53.021521 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:15:53.021649 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:15:54.197602 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Dec 13 13:15:55.426966 update_engine[1701]: I20241213 13:15:55.426158 1701 update_attempter.cc:509] Updating boot flags... Dec 13 13:15:55.477162 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2229) Dec 13 13:16:03.098622 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 13:16:03.108342 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:16:03.214960 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
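The kubelet restart loop above (and continuing below) fails each time because /var/lib/kubelet/config.yaml does not exist yet; on a node like this it is normally written later by provisioning tooling such as kubeadm rather than by hand. As a sketch only, the snippet below writes a placeholder file: the apiVersion/kind are the standard KubeletConfiguration header, while the single setting shown is an arbitrary illustration, not a value taken from this host.

```python
# Illustrative sketch only: the restart loop fails because
# /var/lib/kubelet/config.yaml is missing. Real contents depend on how the
# cluster is bootstrapped; only the apiVersion/kind header is standard.
import pathlib

MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# placeholder setting; actual values come from the cluster bootstrap tooling
cgroupDriver: systemd
"""

def write_placeholder(path: str = "/var/lib/kubelet/config.yaml") -> None:
    p = pathlib.Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(MINIMAL_KUBELET_CONFIG)

if __name__ == "__main__":
    write_placeholder()
```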
Dec 13 13:16:03.220429 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:16:03.272724 kubelet[2285]: E1213 13:16:03.272679 2285 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:16:03.274864 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:16:03.274990 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:16:06.352269 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 13:16:06.358401 systemd[1]: Started sshd@0-10.200.20.34:22-10.200.16.10:47594.service - OpenSSH per-connection server daemon (10.200.16.10:47594). Dec 13 13:16:06.873304 sshd[2294]: Accepted publickey for core from 10.200.16.10 port 47594 ssh2: RSA SHA256:s/ry0hNLnvKqnMQ9cPrjUFS9LNOYotk3LUB29ZrhvrI Dec 13 13:16:06.874649 sshd-session[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:16:06.879202 systemd-logind[1699]: New session 3 of user core. Dec 13 13:16:06.885305 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 13:16:07.260896 systemd[1]: Started sshd@1-10.200.20.34:22-10.200.16.10:47606.service - OpenSSH per-connection server daemon (10.200.16.10:47606). Dec 13 13:16:07.695023 sshd[2299]: Accepted publickey for core from 10.200.16.10 port 47606 ssh2: RSA SHA256:s/ry0hNLnvKqnMQ9cPrjUFS9LNOYotk3LUB29ZrhvrI Dec 13 13:16:07.696377 sshd-session[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:16:07.700399 systemd-logind[1699]: New session 4 of user core. Dec 13 13:16:07.711307 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 13:16:08.018251 sshd[2301]: Connection closed by 10.200.16.10 port 47606 Dec 13 13:16:08.018766 sshd-session[2299]: pam_unix(sshd:session): session closed for user core Dec 13 13:16:08.021844 systemd-logind[1699]: Session 4 logged out. Waiting for processes to exit. Dec 13 13:16:08.022080 systemd[1]: sshd@1-10.200.20.34:22-10.200.16.10:47606.service: Deactivated successfully. Dec 13 13:16:08.023761 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 13:16:08.025607 systemd-logind[1699]: Removed session 4. Dec 13 13:16:08.099320 systemd[1]: Started sshd@2-10.200.20.34:22-10.200.16.10:47608.service - OpenSSH per-connection server daemon (10.200.16.10:47608). Dec 13 13:16:08.532647 sshd[2306]: Accepted publickey for core from 10.200.16.10 port 47608 ssh2: RSA SHA256:s/ry0hNLnvKqnMQ9cPrjUFS9LNOYotk3LUB29ZrhvrI Dec 13 13:16:08.533952 sshd-session[2306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:16:08.539834 systemd-logind[1699]: New session 5 of user core. Dec 13 13:16:08.546330 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 13:16:08.851486 sshd[2308]: Connection closed by 10.200.16.10 port 47608 Dec 13 13:16:08.852121 sshd-session[2306]: pam_unix(sshd:session): session closed for user core Dec 13 13:16:08.855690 systemd[1]: sshd@2-10.200.20.34:22-10.200.16.10:47608.service: Deactivated successfully. Dec 13 13:16:08.857294 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 13:16:08.857943 systemd-logind[1699]: Session 5 logged out. 
Waiting for processes to exit. Dec 13 13:16:08.858982 systemd-logind[1699]: Removed session 5. Dec 13 13:16:08.932796 systemd[1]: Started sshd@3-10.200.20.34:22-10.200.16.10:52714.service - OpenSSH per-connection server daemon (10.200.16.10:52714). Dec 13 13:16:09.367895 sshd[2313]: Accepted publickey for core from 10.200.16.10 port 52714 ssh2: RSA SHA256:s/ry0hNLnvKqnMQ9cPrjUFS9LNOYotk3LUB29ZrhvrI Dec 13 13:16:09.369275 sshd-session[2313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:16:09.373351 systemd-logind[1699]: New session 6 of user core. Dec 13 13:16:09.381339 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 13:16:09.691382 sshd[2315]: Connection closed by 10.200.16.10 port 52714 Dec 13 13:16:09.691087 sshd-session[2313]: pam_unix(sshd:session): session closed for user core Dec 13 13:16:09.695401 systemd[1]: sshd@3-10.200.20.34:22-10.200.16.10:52714.service: Deactivated successfully. Dec 13 13:16:09.696977 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 13:16:09.698374 systemd-logind[1699]: Session 6 logged out. Waiting for processes to exit. Dec 13 13:16:09.699380 systemd-logind[1699]: Removed session 6. Dec 13 13:16:09.771787 systemd[1]: Started sshd@4-10.200.20.34:22-10.200.16.10:52728.service - OpenSSH per-connection server daemon (10.200.16.10:52728). Dec 13 13:16:10.192400 sshd[2320]: Accepted publickey for core from 10.200.16.10 port 52728 ssh2: RSA SHA256:s/ry0hNLnvKqnMQ9cPrjUFS9LNOYotk3LUB29ZrhvrI Dec 13 13:16:10.193671 sshd-session[2320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:16:10.197586 systemd-logind[1699]: New session 7 of user core. Dec 13 13:16:10.208303 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 13:16:10.540695 sudo[2323]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 13:16:10.540973 sudo[2323]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:16:10.573034 sudo[2323]: pam_unix(sudo:session): session closed for user root Dec 13 13:16:10.655002 sshd[2322]: Connection closed by 10.200.16.10 port 52728 Dec 13 13:16:10.654160 sshd-session[2320]: pam_unix(sshd:session): session closed for user core Dec 13 13:16:10.658427 systemd[1]: sshd@4-10.200.20.34:22-10.200.16.10:52728.service: Deactivated successfully. Dec 13 13:16:10.660279 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 13:16:10.660919 systemd-logind[1699]: Session 7 logged out. Waiting for processes to exit. Dec 13 13:16:10.662011 systemd-logind[1699]: Removed session 7. Dec 13 13:16:10.728950 systemd[1]: Started sshd@5-10.200.20.34:22-10.200.16.10:52740.service - OpenSSH per-connection server daemon (10.200.16.10:52740). Dec 13 13:16:11.147672 sshd[2328]: Accepted publickey for core from 10.200.16.10 port 52740 ssh2: RSA SHA256:s/ry0hNLnvKqnMQ9cPrjUFS9LNOYotk3LUB29ZrhvrI Dec 13 13:16:11.148990 sshd-session[2328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:16:11.152805 systemd-logind[1699]: New session 8 of user core. Dec 13 13:16:11.164279 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 13 13:16:11.385148 sudo[2332]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 13:16:11.385443 sudo[2332]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:16:11.388834 sudo[2332]: pam_unix(sudo:session): session closed for user root Dec 13 13:16:11.393958 sudo[2331]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 13 13:16:11.394255 sudo[2331]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:16:11.409459 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:16:11.433804 augenrules[2354]: No rules Dec 13 13:16:11.435044 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:16:11.435430 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:16:11.436999 sudo[2331]: pam_unix(sudo:session): session closed for user root Dec 13 13:16:11.518174 sshd[2330]: Connection closed by 10.200.16.10 port 52740 Dec 13 13:16:11.518797 sshd-session[2328]: pam_unix(sshd:session): session closed for user core Dec 13 13:16:11.521488 systemd[1]: sshd@5-10.200.20.34:22-10.200.16.10:52740.service: Deactivated successfully. Dec 13 13:16:11.523192 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 13:16:11.524725 systemd-logind[1699]: Session 8 logged out. Waiting for processes to exit. Dec 13 13:16:11.525711 systemd-logind[1699]: Removed session 8. Dec 13 13:16:11.596467 systemd[1]: Started sshd@6-10.200.20.34:22-10.200.16.10:52754.service - OpenSSH per-connection server daemon (10.200.16.10:52754). Dec 13 13:16:12.040435 sshd[2362]: Accepted publickey for core from 10.200.16.10 port 52754 ssh2: RSA SHA256:s/ry0hNLnvKqnMQ9cPrjUFS9LNOYotk3LUB29ZrhvrI Dec 13 13:16:12.041731 sshd-session[2362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:16:12.046565 systemd-logind[1699]: New session 9 of user core. Dec 13 13:16:12.052359 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 13:16:12.284994 sudo[2365]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 13:16:12.285314 sudo[2365]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:16:13.348265 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Dec 13 13:16:13.356342 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:16:13.477500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:16:13.482036 (kubelet)[2391]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:16:13.523197 kubelet[2391]: E1213 13:16:13.523120 2391 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:16:13.525554 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:16:13.525893 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
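The kubelet failures repeating above (restart counters 1 through 6 so far) all stem from the same missing file, /var/lib/kubelet/config.yaml. On a kubeadm-provisioned node that file is only written by kubeadm init or kubeadm join, so the unit is expected to crash-loop until the bootstrap step (here apparently the install.sh run via sudo) gets that far. A quick way to confirm the loop and the missing file from a shell on the node, assuming the standard kubeadm file layout:

    # Last few kubelet attempts; each shows the config.yaml "no such file or directory" error
    journalctl -u kubelet --no-pager -n 20
    # The unit plus any kubeadm drop-in that points --config at /var/lib/kubelet/config.yaml
    systemctl cat kubelet.service
    # Absent until kubeadm init/join writes it
    ls -l /var/lib/kubelet/config.yaml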
Dec 13 13:16:14.130599 (dockerd)[2400]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 13:16:14.130961 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 13:16:14.796006 dockerd[2400]: time="2024-12-13T13:16:14.795942457Z" level=info msg="Starting up" Dec 13 13:16:15.134923 dockerd[2400]: time="2024-12-13T13:16:15.134867750Z" level=info msg="Loading containers: start." Dec 13 13:16:15.338151 kernel: Initializing XFRM netlink socket Dec 13 13:16:15.421212 systemd-networkd[1451]: docker0: Link UP Dec 13 13:16:15.453461 dockerd[2400]: time="2024-12-13T13:16:15.453411701Z" level=info msg="Loading containers: done." Dec 13 13:16:15.474916 dockerd[2400]: time="2024-12-13T13:16:15.474856405Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 13:16:15.475074 dockerd[2400]: time="2024-12-13T13:16:15.474972845Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Dec 13 13:16:15.475137 dockerd[2400]: time="2024-12-13T13:16:15.475101325Z" level=info msg="Daemon has completed initialization" Dec 13 13:16:15.527053 dockerd[2400]: time="2024-12-13T13:16:15.526684582Z" level=info msg="API listen on /run/docker.sock" Dec 13 13:16:15.526916 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 13:16:17.305920 containerd[1734]: time="2024-12-13T13:16:17.305872183Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 13:16:18.182961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3216422378.mount: Deactivated successfully. 
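The docker daemon comes up cleanly here; the only warning of note is the overlay2 one ("Not using native diff ... CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"), which is informational: dockerd falls back to its own diff implementation and, as the message itself says, only image-build performance may be affected. To double-check which storage driver the daemon settled on, a small sketch:

    # Confirm the graph driver the daemon selected (overlay2 in the log above)
    docker info --format '{{.Driver}}'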
Dec 13 13:16:19.745196 containerd[1734]: time="2024-12-13T13:16:19.744339191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:19.746482 containerd[1734]: time="2024-12-13T13:16:19.746249993Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=29864010" Dec 13 13:16:19.748907 containerd[1734]: time="2024-12-13T13:16:19.748868076Z" level=info msg="ImageCreate event name:\"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:19.753231 containerd[1734]: time="2024-12-13T13:16:19.753165961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:19.754275 containerd[1734]: time="2024-12-13T13:16:19.754236402Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"29860810\" in 2.448317099s" Dec 13 13:16:19.754434 containerd[1734]: time="2024-12-13T13:16:19.754281242Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\"" Dec 13 13:16:19.775858 containerd[1734]: time="2024-12-13T13:16:19.775812026Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 13:16:21.793278 containerd[1734]: time="2024-12-13T13:16:21.792291889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:21.795804 containerd[1734]: time="2024-12-13T13:16:21.795757973Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=26900694" Dec 13 13:16:21.798375 containerd[1734]: time="2024-12-13T13:16:21.798342495Z" level=info msg="ImageCreate event name:\"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:21.804339 containerd[1734]: time="2024-12-13T13:16:21.804280822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:21.805518 containerd[1734]: time="2024-12-13T13:16:21.805486943Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"28303015\" in 2.029628637s" Dec 13 13:16:21.805646 containerd[1734]: time="2024-12-13T13:16:21.805631703Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\"" Dec 13 13:16:21.830443 
containerd[1734]: time="2024-12-13T13:16:21.830398211Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 13:16:23.142935 containerd[1734]: time="2024-12-13T13:16:23.142873984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:23.145205 containerd[1734]: time="2024-12-13T13:16:23.145153426Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=16164332" Dec 13 13:16:23.150495 containerd[1734]: time="2024-12-13T13:16:23.150430912Z" level=info msg="ImageCreate event name:\"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:23.155838 containerd[1734]: time="2024-12-13T13:16:23.155751278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:23.157208 containerd[1734]: time="2024-12-13T13:16:23.157040359Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"17566671\" in 1.326383988s" Dec 13 13:16:23.157208 containerd[1734]: time="2024-12-13T13:16:23.157082679Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\"" Dec 13 13:16:23.180395 containerd[1734]: time="2024-12-13T13:16:23.180363304Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 13:16:23.598228 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Dec 13 13:16:23.603341 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:16:23.701183 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:16:23.719464 (kubelet)[2675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:16:23.758651 kubelet[2675]: E1213 13:16:23.758606 2675 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:16:23.761493 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:16:23.761645 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:16:24.827830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount106763506.mount: Deactivated successfully. 
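These control-plane images are being pulled by containerd into its k8s.io namespace (the namespace used by the CRI plugin), not into docker's image store, so they would not appear in docker images. Either of the following shows what has landed so far; the crictl endpoint is an assumption and may already be set in /etc/crictl.yaml:

    # containerd's CRI image store
    ctr -n k8s.io images ls | grep registry.k8s.io
    # Same view through the CRI API (endpoint is an assumption; adjust if crictl is already configured)
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images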
Dec 13 13:16:26.310140 containerd[1734]: time="2024-12-13T13:16:26.310072808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:26.312202 containerd[1734]: time="2024-12-13T13:16:26.312141371Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662011" Dec 13 13:16:26.315497 containerd[1734]: time="2024-12-13T13:16:26.315429854Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:26.319006 containerd[1734]: time="2024-12-13T13:16:26.318938018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:26.319970 containerd[1734]: time="2024-12-13T13:16:26.319585779Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 3.139024555s" Dec 13 13:16:26.319970 containerd[1734]: time="2024-12-13T13:16:26.319624499Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\"" Dec 13 13:16:26.341265 containerd[1734]: time="2024-12-13T13:16:26.341231762Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 13:16:27.005028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3059030077.mount: Deactivated successfully. 
Dec 13 13:16:27.936995 containerd[1734]: time="2024-12-13T13:16:27.936934267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:27.939654 containerd[1734]: time="2024-12-13T13:16:27.939408349Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Dec 13 13:16:27.943951 containerd[1734]: time="2024-12-13T13:16:27.943912914Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:27.949234 containerd[1734]: time="2024-12-13T13:16:27.949152480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:27.950653 containerd[1734]: time="2024-12-13T13:16:27.950188161Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.608765679s" Dec 13 13:16:27.950653 containerd[1734]: time="2024-12-13T13:16:27.950229081Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 13:16:27.970186 containerd[1734]: time="2024-12-13T13:16:27.970148902Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 13:16:28.526539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount680333070.mount: Deactivated successfully. 
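registry.k8s.io/pause:3.9 is the sandbox ("infra") image that will back every pod sandbox. Containerd carries its own sandbox_image setting, and the kubelet's --pod-infra-container-image flag (noted as deprecated later in this log) mainly needs to agree with it so the image is exempt from kubelet image garbage collection. A sketch for checking which pause image containerd is configured to use, assuming the stock containerd configuration mechanism:

    # Effective containerd configuration, filtered to the sandbox image entry
    containerd config dump | grep sandbox_image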
Dec 13 13:16:28.547304 containerd[1734]: time="2024-12-13T13:16:28.547252319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:28.549642 containerd[1734]: time="2024-12-13T13:16:28.549588481Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Dec 13 13:16:28.552824 containerd[1734]: time="2024-12-13T13:16:28.552786925Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:28.558283 containerd[1734]: time="2024-12-13T13:16:28.558219731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:28.559157 containerd[1734]: time="2024-12-13T13:16:28.558959931Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 588.542628ms" Dec 13 13:16:28.559157 containerd[1734]: time="2024-12-13T13:16:28.558996292Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 13:16:28.581873 containerd[1734]: time="2024-12-13T13:16:28.581822396Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 13:16:29.330772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4048131676.mount: Deactivated successfully. Dec 13 13:16:32.486806 containerd[1734]: time="2024-12-13T13:16:32.486752425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:32.488847 containerd[1734]: time="2024-12-13T13:16:32.488797547Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Dec 13 13:16:32.492113 containerd[1734]: time="2024-12-13T13:16:32.492078311Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:32.497735 containerd[1734]: time="2024-12-13T13:16:32.497689237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:16:32.498810 containerd[1734]: time="2024-12-13T13:16:32.498764678Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.916901602s" Dec 13 13:16:32.498810 containerd[1734]: time="2024-12-13T13:16:32.498804198Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Dec 13 13:16:33.848875 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. 
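The restart counter reaching 8 at roughly ten-second intervals matches the usual kubelet unit settings on a kubeadm-style install (Restart=always with a 10 s restart delay); systemd is doing exactly what it was told while the config file is still missing. The effective values can be read back from systemd directly:

    # Restart policy, delay and restart count as systemd sees them
    systemctl show kubelet -p Restart -p RestartUSec -p NRestarts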
Dec 13 13:16:33.858681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:16:34.079354 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:16:34.084627 (kubelet)[2866]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:16:34.131875 kubelet[2866]: E1213 13:16:34.131747 2866 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:16:34.135329 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:16:34.135458 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:16:38.089658 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:16:38.098386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:16:38.121407 systemd[1]: Reloading requested from client PID 2880 ('systemctl') (unit session-9.scope)... Dec 13 13:16:38.121574 systemd[1]: Reloading... Dec 13 13:16:38.221711 zram_generator::config[2920]: No configuration found. Dec 13 13:16:38.337640 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:16:38.419283 systemd[1]: Reloading finished in 297 ms. Dec 13 13:16:38.469652 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 13:16:38.469889 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 13:16:38.470307 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:16:38.476638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:16:38.595753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:16:38.603600 (kubelet)[2989]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:16:38.643406 kubelet[2989]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:16:38.643736 kubelet[2989]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:16:38.643779 kubelet[2989]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 13:16:38.643905 kubelet[2989]: I1213 13:16:38.643875 2989 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:16:39.145155 kubelet[2989]: I1213 13:16:39.143157 2989 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 13:16:39.145155 kubelet[2989]: I1213 13:16:39.143191 2989 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:16:39.145155 kubelet[2989]: I1213 13:16:39.143428 2989 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 13:16:39.156265 kubelet[2989]: I1213 13:16:39.156228 2989 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:16:39.157740 kubelet[2989]: E1213 13:16:39.157701 2989 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 13:16:39.169253 kubelet[2989]: I1213 13:16:39.169185 2989 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 13:16:39.169523 kubelet[2989]: I1213 13:16:39.169480 2989 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:16:39.169731 kubelet[2989]: I1213 13:16:39.169519 2989 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.0.0-a-128d80e197","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:16:39.169815 kubelet[2989]: I1213 13:16:39.169736 2989 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:16:39.169815 kubelet[2989]: I1213 13:16:39.169744 2989 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:16:39.169924 kubelet[2989]: I1213 13:16:39.169903 2989 state_mem.go:36] "Initialized new in-memory 
state store" Dec 13 13:16:39.170770 kubelet[2989]: I1213 13:16:39.170745 2989 kubelet.go:400] "Attempting to sync node with API server" Dec 13 13:16:39.170813 kubelet[2989]: I1213 13:16:39.170772 2989 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:16:39.170813 kubelet[2989]: I1213 13:16:39.170810 2989 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:16:39.170857 kubelet[2989]: I1213 13:16:39.170825 2989 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:16:39.171953 kubelet[2989]: W1213 13:16:39.171754 2989 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 13:16:39.171953 kubelet[2989]: E1213 13:16:39.171805 2989 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 13:16:39.173195 kubelet[2989]: W1213 13:16:39.172764 2989 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.0.0-a-128d80e197&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 13:16:39.173195 kubelet[2989]: E1213 13:16:39.172828 2989 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.0.0-a-128d80e197&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 13:16:39.173356 kubelet[2989]: I1213 13:16:39.173336 2989 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:16:39.173809 kubelet[2989]: I1213 13:16:39.173784 2989 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:16:39.175424 kubelet[2989]: W1213 13:16:39.175390 2989 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 13:16:39.176058 kubelet[2989]: I1213 13:16:39.176033 2989 server.go:1264] "Started kubelet" Dec 13 13:16:39.179767 kubelet[2989]: I1213 13:16:39.179729 2989 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:16:39.183342 kubelet[2989]: I1213 13:16:39.183282 2989 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:16:39.184379 kubelet[2989]: I1213 13:16:39.184343 2989 server.go:455] "Adding debug handlers to kubelet server" Dec 13 13:16:39.185389 kubelet[2989]: I1213 13:16:39.185325 2989 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:16:39.185589 kubelet[2989]: I1213 13:16:39.185568 2989 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:16:39.186198 kubelet[2989]: I1213 13:16:39.186178 2989 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:16:39.187664 kubelet[2989]: I1213 13:16:39.187638 2989 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 13:16:39.187824 kubelet[2989]: I1213 13:16:39.187813 2989 reconciler.go:26] "Reconciler: start to sync state" Dec 13 13:16:39.188742 kubelet[2989]: E1213 13:16:39.188694 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.0.0-a-128d80e197?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="200ms" Dec 13 13:16:39.189506 kubelet[2989]: W1213 13:16:39.188896 2989 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 13:16:39.189506 kubelet[2989]: E1213 13:16:39.188953 2989 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 13:16:39.189506 kubelet[2989]: E1213 13:16:39.189006 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.34:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.34:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.0.0-a-128d80e197.1810bef11461fe61 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.0.0-a-128d80e197,UID:ci-4186.0.0-a-128d80e197,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.0.0-a-128d80e197,},FirstTimestamp:2024-12-13 13:16:39.176003169 +0000 UTC m=+0.568722483,LastTimestamp:2024-12-13 13:16:39.176003169 +0000 UTC m=+0.568722483,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.0.0-a-128d80e197,}" Dec 13 13:16:39.190556 kubelet[2989]: I1213 13:16:39.190535 2989 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:16:39.190917 kubelet[2989]: I1213 13:16:39.190896 2989 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 
13:16:39.191200 kubelet[2989]: E1213 13:16:39.190946 2989 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:16:39.193344 kubelet[2989]: I1213 13:16:39.193311 2989 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:16:39.209287 kubelet[2989]: I1213 13:16:39.209027 2989 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:16:39.211622 kubelet[2989]: I1213 13:16:39.211493 2989 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 13:16:39.211622 kubelet[2989]: I1213 13:16:39.211542 2989 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:16:39.211622 kubelet[2989]: I1213 13:16:39.211562 2989 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 13:16:39.211622 kubelet[2989]: E1213 13:16:39.211610 2989 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:16:39.214048 kubelet[2989]: W1213 13:16:39.213443 2989 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 13:16:39.214048 kubelet[2989]: E1213 13:16:39.213486 2989 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 13:16:39.216918 kubelet[2989]: I1213 13:16:39.216892 2989 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:16:39.216918 kubelet[2989]: I1213 13:16:39.216910 2989 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:16:39.217057 kubelet[2989]: I1213 13:16:39.216932 2989 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:16:39.223571 kubelet[2989]: I1213 13:16:39.223540 2989 policy_none.go:49] "None policy: Start" Dec 13 13:16:39.224371 kubelet[2989]: I1213 13:16:39.224341 2989 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:16:39.224371 kubelet[2989]: I1213 13:16:39.224374 2989 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:16:39.233070 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 13:16:39.241230 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 13:16:39.244702 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
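The three slices created here are the kubelet's cgroup layout for pod QoS classes: with the systemd cgroup driver (CgroupDriver "systemd" in the container manager config above), Burstable and BestEffort pods land under kubepods-burstable.slice and kubepods-besteffort.slice, while Guaranteed pods sit directly under kubepods.slice. They can be inspected like any other systemd unit:

    # The QoS parent slices the kubelet just created
    systemctl status kubepods.slice kubepods-burstable.slice kubepods-besteffort.slice --no-pager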
Dec 13 13:16:39.252081 kubelet[2989]: I1213 13:16:39.252053 2989 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:16:39.252886 kubelet[2989]: I1213 13:16:39.252789 2989 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 13:16:39.253053 kubelet[2989]: I1213 13:16:39.253017 2989 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:16:39.255806 kubelet[2989]: E1213 13:16:39.255756 2989 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.0.0-a-128d80e197\" not found" Dec 13 13:16:39.290321 kubelet[2989]: I1213 13:16:39.290297 2989 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.0.0-a-128d80e197" Dec 13 13:16:39.290836 kubelet[2989]: E1213 13:16:39.290778 2989 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4186.0.0-a-128d80e197" Dec 13 13:16:39.312434 kubelet[2989]: I1213 13:16:39.312121 2989 topology_manager.go:215] "Topology Admit Handler" podUID="4c3be0cdb34a6fc69fdcb17e2a61aaec" podNamespace="kube-system" podName="kube-apiserver-ci-4186.0.0-a-128d80e197" Dec 13 13:16:39.314196 kubelet[2989]: I1213 13:16:39.313942 2989 topology_manager.go:215] "Topology Admit Handler" podUID="b418c78d5d8db69dcf91fe3a46823890" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.0.0-a-128d80e197" Dec 13 13:16:39.316018 kubelet[2989]: I1213 13:16:39.315786 2989 topology_manager.go:215] "Topology Admit Handler" podUID="3c742519b8f8c3761a62aa0c322fc800" podNamespace="kube-system" podName="kube-scheduler-ci-4186.0.0-a-128d80e197" Dec 13 13:16:39.323007 systemd[1]: Created slice kubepods-burstable-pod4c3be0cdb34a6fc69fdcb17e2a61aaec.slice - libcontainer container kubepods-burstable-pod4c3be0cdb34a6fc69fdcb17e2a61aaec.slice. Dec 13 13:16:39.345878 systemd[1]: Created slice kubepods-burstable-pod3c742519b8f8c3761a62aa0c322fc800.slice - libcontainer container kubepods-burstable-pod3c742519b8f8c3761a62aa0c322fc800.slice. Dec 13 13:16:39.350089 systemd[1]: Created slice kubepods-burstable-podb418c78d5d8db69dcf91fe3a46823890.slice - libcontainer container kubepods-burstable-podb418c78d5d8db69dcf91fe3a46823890.slice. 
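The three "Topology Admit Handler" entries are the kubelet picking up static pod manifests from the path registered earlier in this log ("Adding static pod path" path="/etc/kubernetes/manifests"): one each for kube-apiserver, kube-controller-manager and kube-scheduler on this node. On a stock kubeadm control-plane node that directory usually also contains etcd.yaml; listing it shows exactly which pods the kubelet will run without any API server involvement:

    # Static pod manifests the kubelet watches (path taken from the log above)
    ls -l /etc/kubernetes/manifests/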
Dec 13 13:16:39.388373 kubelet[2989]: I1213 13:16:39.388249 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c3be0cdb34a6fc69fdcb17e2a61aaec-k8s-certs\") pod \"kube-apiserver-ci-4186.0.0-a-128d80e197\" (UID: \"4c3be0cdb34a6fc69fdcb17e2a61aaec\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-128d80e197" Dec 13 13:16:39.388373 kubelet[2989]: I1213 13:16:39.388283 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b418c78d5d8db69dcf91fe3a46823890-ca-certs\") pod \"kube-controller-manager-ci-4186.0.0-a-128d80e197\" (UID: \"b418c78d5d8db69dcf91fe3a46823890\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-128d80e197" Dec 13 13:16:39.388373 kubelet[2989]: I1213 13:16:39.388305 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b418c78d5d8db69dcf91fe3a46823890-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.0.0-a-128d80e197\" (UID: \"b418c78d5d8db69dcf91fe3a46823890\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-128d80e197" Dec 13 13:16:39.388373 kubelet[2989]: I1213 13:16:39.388320 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b418c78d5d8db69dcf91fe3a46823890-k8s-certs\") pod \"kube-controller-manager-ci-4186.0.0-a-128d80e197\" (UID: \"b418c78d5d8db69dcf91fe3a46823890\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-128d80e197" Dec 13 13:16:39.388373 kubelet[2989]: I1213 13:16:39.388335 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b418c78d5d8db69dcf91fe3a46823890-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.0.0-a-128d80e197\" (UID: \"b418c78d5d8db69dcf91fe3a46823890\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-128d80e197" Dec 13 13:16:39.388650 kubelet[2989]: I1213 13:16:39.388350 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c3be0cdb34a6fc69fdcb17e2a61aaec-ca-certs\") pod \"kube-apiserver-ci-4186.0.0-a-128d80e197\" (UID: \"4c3be0cdb34a6fc69fdcb17e2a61aaec\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-128d80e197" Dec 13 13:16:39.388650 kubelet[2989]: I1213 13:16:39.388365 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c3be0cdb34a6fc69fdcb17e2a61aaec-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.0.0-a-128d80e197\" (UID: \"4c3be0cdb34a6fc69fdcb17e2a61aaec\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-128d80e197" Dec 13 13:16:39.388650 kubelet[2989]: I1213 13:16:39.388380 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b418c78d5d8db69dcf91fe3a46823890-kubeconfig\") pod \"kube-controller-manager-ci-4186.0.0-a-128d80e197\" (UID: \"b418c78d5d8db69dcf91fe3a46823890\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-128d80e197" Dec 13 13:16:39.388650 kubelet[2989]: I1213 13:16:39.388394 2989 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3c742519b8f8c3761a62aa0c322fc800-kubeconfig\") pod \"kube-scheduler-ci-4186.0.0-a-128d80e197\" (UID: \"3c742519b8f8c3761a62aa0c322fc800\") " pod="kube-system/kube-scheduler-ci-4186.0.0-a-128d80e197" Dec 13 13:16:39.389344 kubelet[2989]: E1213 13:16:39.389309 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.0.0-a-128d80e197?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="400ms" Dec 13 13:16:39.492821 kubelet[2989]: I1213 13:16:39.492521 2989 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.0.0-a-128d80e197" Dec 13 13:16:39.492909 kubelet[2989]: E1213 13:16:39.492854 2989 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4186.0.0-a-128d80e197" Dec 13 13:16:39.643706 containerd[1734]: time="2024-12-13T13:16:39.643653842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.0.0-a-128d80e197,Uid:4c3be0cdb34a6fc69fdcb17e2a61aaec,Namespace:kube-system,Attempt:0,}" Dec 13 13:16:39.649753 containerd[1734]: time="2024-12-13T13:16:39.649718810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.0.0-a-128d80e197,Uid:3c742519b8f8c3761a62aa0c322fc800,Namespace:kube-system,Attempt:0,}" Dec 13 13:16:39.653030 containerd[1734]: time="2024-12-13T13:16:39.652798254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.0.0-a-128d80e197,Uid:b418c78d5d8db69dcf91fe3a46823890,Namespace:kube-system,Attempt:0,}" Dec 13 13:16:39.790735 kubelet[2989]: E1213 13:16:39.790618 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.0.0-a-128d80e197?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="800ms" Dec 13 13:16:39.895292 kubelet[2989]: I1213 13:16:39.895194 2989 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.0.0-a-128d80e197" Dec 13 13:16:39.895678 kubelet[2989]: E1213 13:16:39.895608 2989 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4186.0.0-a-128d80e197" Dec 13 13:16:40.181347 kubelet[2989]: W1213 13:16:40.181257 2989 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.0.0-a-128d80e197&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 13:16:40.181347 kubelet[2989]: E1213 13:16:40.181320 2989 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.0.0-a-128d80e197&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 13:16:40.247997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4054243691.mount: Deactivated successfully. 
Dec 13 13:16:40.281907 containerd[1734]: time="2024-12-13T13:16:40.281854212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:16:40.296162 containerd[1734]: time="2024-12-13T13:16:40.296076350Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Dec 13 13:16:40.301892 containerd[1734]: time="2024-12-13T13:16:40.301847997Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:16:40.308164 containerd[1734]: time="2024-12-13T13:16:40.307673245Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:16:40.317276 containerd[1734]: time="2024-12-13T13:16:40.317079257Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:16:40.321775 containerd[1734]: time="2024-12-13T13:16:40.321045462Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:16:40.328337 containerd[1734]: time="2024-12-13T13:16:40.328286071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:16:40.329068 kubelet[2989]: W1213 13:16:40.329006 2989 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 13:16:40.329120 kubelet[2989]: E1213 13:16:40.329075 2989 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 13:16:40.329823 containerd[1734]: time="2024-12-13T13:16:40.329780473Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 686.038831ms" Dec 13 13:16:40.333344 containerd[1734]: time="2024-12-13T13:16:40.333284757Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:16:40.337934 containerd[1734]: time="2024-12-13T13:16:40.337746563Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 687.949793ms" Dec 13 13:16:40.366150 containerd[1734]: time="2024-12-13T13:16:40.366091759Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image 
id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 713.223825ms" Dec 13 13:16:40.443005 kubelet[2989]: W1213 13:16:40.442837 2989 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 13:16:40.443005 kubelet[2989]: E1213 13:16:40.442900 2989 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 13:16:40.592036 kubelet[2989]: E1213 13:16:40.591979 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.0.0-a-128d80e197?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="1.6s" Dec 13 13:16:40.697601 kubelet[2989]: I1213 13:16:40.697497 2989 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.0.0-a-128d80e197" Dec 13 13:16:40.697859 kubelet[2989]: E1213 13:16:40.697828 2989 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4186.0.0-a-128d80e197" Dec 13 13:16:40.792617 kubelet[2989]: W1213 13:16:40.792576 2989 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 13:16:40.792617 kubelet[2989]: E1213 13:16:40.792623 2989 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 13:16:41.185310 kubelet[2989]: E1213 13:16:41.185241 2989 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 13:16:41.279617 containerd[1734]: time="2024-12-13T13:16:41.277857076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:16:41.279617 containerd[1734]: time="2024-12-13T13:16:41.277917196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:16:41.279617 containerd[1734]: time="2024-12-13T13:16:41.277934236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:16:41.279617 containerd[1734]: time="2024-12-13T13:16:41.278005116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:16:41.284215 containerd[1734]: time="2024-12-13T13:16:41.283179043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:16:41.284215 containerd[1734]: time="2024-12-13T13:16:41.283248603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:16:41.284215 containerd[1734]: time="2024-12-13T13:16:41.283261763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:16:41.284215 containerd[1734]: time="2024-12-13T13:16:41.283343363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:16:41.296777 containerd[1734]: time="2024-12-13T13:16:41.296597420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:16:41.297422 containerd[1734]: time="2024-12-13T13:16:41.297068500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:16:41.297835 containerd[1734]: time="2024-12-13T13:16:41.297721501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:16:41.298338 containerd[1734]: time="2024-12-13T13:16:41.298293742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:16:41.311682 systemd[1]: run-containerd-runc-k8s.io-120d43ae8fb76fec4431bdd3f2f053166440a4a5e0700d53872de50c27f8adee-runc.w7IoUg.mount: Deactivated successfully. Dec 13 13:16:41.326410 systemd[1]: Started cri-containerd-120d43ae8fb76fec4431bdd3f2f053166440a4a5e0700d53872de50c27f8adee.scope - libcontainer container 120d43ae8fb76fec4431bdd3f2f053166440a4a5e0700d53872de50c27f8adee. Dec 13 13:16:41.336345 systemd[1]: Started cri-containerd-df0d7831d09e8928622c87c5aaab855b9e5833eecaba7ffb3d0d196a6a78e116.scope - libcontainer container df0d7831d09e8928622c87c5aaab855b9e5833eecaba7ffb3d0d196a6a78e116. Dec 13 13:16:41.340371 systemd[1]: Started cri-containerd-08ff0c458ba36acd58b86fb941603a689bad6bc9160671bf4752a239799e8024.scope - libcontainer container 08ff0c458ba36acd58b86fb941603a689bad6bc9160671bf4752a239799e8024. 
Dec 13 13:16:41.390407 containerd[1734]: time="2024-12-13T13:16:41.389825058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.0.0-a-128d80e197,Uid:b418c78d5d8db69dcf91fe3a46823890,Namespace:kube-system,Attempt:0,} returns sandbox id \"120d43ae8fb76fec4431bdd3f2f053166440a4a5e0700d53872de50c27f8adee\"" Dec 13 13:16:41.395646 containerd[1734]: time="2024-12-13T13:16:41.395531025Z" level=info msg="CreateContainer within sandbox \"120d43ae8fb76fec4431bdd3f2f053166440a4a5e0700d53872de50c27f8adee\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 13:16:41.398743 containerd[1734]: time="2024-12-13T13:16:41.398697829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.0.0-a-128d80e197,Uid:3c742519b8f8c3761a62aa0c322fc800,Namespace:kube-system,Attempt:0,} returns sandbox id \"df0d7831d09e8928622c87c5aaab855b9e5833eecaba7ffb3d0d196a6a78e116\"" Dec 13 13:16:41.404086 containerd[1734]: time="2024-12-13T13:16:41.403952956Z" level=info msg="CreateContainer within sandbox \"df0d7831d09e8928622c87c5aaab855b9e5833eecaba7ffb3d0d196a6a78e116\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 13:16:41.404563 containerd[1734]: time="2024-12-13T13:16:41.404539797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.0.0-a-128d80e197,Uid:4c3be0cdb34a6fc69fdcb17e2a61aaec,Namespace:kube-system,Attempt:0,} returns sandbox id \"08ff0c458ba36acd58b86fb941603a689bad6bc9160671bf4752a239799e8024\"" Dec 13 13:16:41.409307 containerd[1734]: time="2024-12-13T13:16:41.409189682Z" level=info msg="CreateContainer within sandbox \"08ff0c458ba36acd58b86fb941603a689bad6bc9160671bf4752a239799e8024\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 13:16:41.455985 containerd[1734]: time="2024-12-13T13:16:41.455865462Z" level=info msg="CreateContainer within sandbox \"df0d7831d09e8928622c87c5aaab855b9e5833eecaba7ffb3d0d196a6a78e116\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"11fb4aa294e416af0d7b1342cd16818292a6e51c17e1a6d6cc2132065f471c7c\"" Dec 13 13:16:41.457376 containerd[1734]: time="2024-12-13T13:16:41.457332664Z" level=info msg="StartContainer for \"11fb4aa294e416af0d7b1342cd16818292a6e51c17e1a6d6cc2132065f471c7c\"" Dec 13 13:16:41.464228 containerd[1734]: time="2024-12-13T13:16:41.464098032Z" level=info msg="CreateContainer within sandbox \"120d43ae8fb76fec4431bdd3f2f053166440a4a5e0700d53872de50c27f8adee\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8b894a26c2f23e98085a57e605969e7ed14cceb4f96ad2f95c812898e9c32387\"" Dec 13 13:16:41.465993 containerd[1734]: time="2024-12-13T13:16:41.465935874Z" level=info msg="StartContainer for \"8b894a26c2f23e98085a57e605969e7ed14cceb4f96ad2f95c812898e9c32387\"" Dec 13 13:16:41.472550 containerd[1734]: time="2024-12-13T13:16:41.472408643Z" level=info msg="CreateContainer within sandbox \"08ff0c458ba36acd58b86fb941603a689bad6bc9160671bf4752a239799e8024\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1e5efbc6315a688f77aaab5e3dfc68f511e957d03e2935f16c4ec81f6907a032\"" Dec 13 13:16:41.472974 containerd[1734]: time="2024-12-13T13:16:41.472952643Z" level=info msg="StartContainer for \"1e5efbc6315a688f77aaab5e3dfc68f511e957d03e2935f16c4ec81f6907a032\"" Dec 13 13:16:41.495483 systemd[1]: Started cri-containerd-11fb4aa294e416af0d7b1342cd16818292a6e51c17e1a6d6cc2132065f471c7c.scope - libcontainer container 
11fb4aa294e416af0d7b1342cd16818292a6e51c17e1a6d6cc2132065f471c7c. Dec 13 13:16:41.503992 systemd[1]: Started cri-containerd-8b894a26c2f23e98085a57e605969e7ed14cceb4f96ad2f95c812898e9c32387.scope - libcontainer container 8b894a26c2f23e98085a57e605969e7ed14cceb4f96ad2f95c812898e9c32387. Dec 13 13:16:41.528346 systemd[1]: Started cri-containerd-1e5efbc6315a688f77aaab5e3dfc68f511e957d03e2935f16c4ec81f6907a032.scope - libcontainer container 1e5efbc6315a688f77aaab5e3dfc68f511e957d03e2935f16c4ec81f6907a032. Dec 13 13:16:41.569091 containerd[1734]: time="2024-12-13T13:16:41.568977165Z" level=info msg="StartContainer for \"8b894a26c2f23e98085a57e605969e7ed14cceb4f96ad2f95c812898e9c32387\" returns successfully" Dec 13 13:16:41.569874 containerd[1734]: time="2024-12-13T13:16:41.569512286Z" level=info msg="StartContainer for \"11fb4aa294e416af0d7b1342cd16818292a6e51c17e1a6d6cc2132065f471c7c\" returns successfully" Dec 13 13:16:41.598182 containerd[1734]: time="2024-12-13T13:16:41.597602002Z" level=info msg="StartContainer for \"1e5efbc6315a688f77aaab5e3dfc68f511e957d03e2935f16c4ec81f6907a032\" returns successfully" Dec 13 13:16:42.302021 kubelet[2989]: I1213 13:16:42.301464 2989 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.0.0-a-128d80e197" Dec 13 13:16:43.941421 kubelet[2989]: E1213 13:16:43.941376 2989 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186.0.0-a-128d80e197\" not found" node="ci-4186.0.0-a-128d80e197" Dec 13 13:16:43.988539 kubelet[2989]: I1213 13:16:43.988247 2989 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.0.0-a-128d80e197" Dec 13 13:16:44.173855 kubelet[2989]: I1213 13:16:44.173613 2989 apiserver.go:52] "Watching apiserver" Dec 13 13:16:44.188465 kubelet[2989]: I1213 13:16:44.188430 2989 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 13:16:46.038526 systemd[1]: Reloading requested from client PID 3268 ('systemctl') (unit session-9.scope)... Dec 13 13:16:46.038544 systemd[1]: Reloading... Dec 13 13:16:46.132224 zram_generator::config[3311]: No configuration found. Dec 13 13:16:46.261490 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:16:46.355173 systemd[1]: Reloading finished in 316 ms. Dec 13 13:16:46.396023 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:16:46.407565 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:16:46.407888 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:16:46.415580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:16:46.551396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:16:46.558958 (kubelet)[3371]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:16:46.852278 kubelet[3371]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:16:46.852278 kubelet[3371]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Dec 13 13:16:46.852278 kubelet[3371]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:16:46.852278 kubelet[3371]: I1213 13:16:46.616085 3371 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:16:46.852278 kubelet[3371]: I1213 13:16:46.621888 3371 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 13:16:46.852278 kubelet[3371]: I1213 13:16:46.621923 3371 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:16:46.852278 kubelet[3371]: I1213 13:16:46.622260 3371 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 13:16:46.855745 kubelet[3371]: I1213 13:16:46.853287 3371 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 13:16:46.855745 kubelet[3371]: I1213 13:16:46.854929 3371 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:16:46.864883 kubelet[3371]: I1213 13:16:46.864830 3371 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 13:16:46.865062 kubelet[3371]: I1213 13:16:46.865027 3371 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:16:46.865277 kubelet[3371]: I1213 13:16:46.865059 3371 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.0.0-a-128d80e197","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:16:46.865374 kubelet[3371]: I1213 13:16:46.865282 3371 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:16:46.865374 kubelet[3371]: I1213 13:16:46.865291 3371 container_manager_linux.go:301] "Creating device plugin 
manager" Dec 13 13:16:46.865374 kubelet[3371]: I1213 13:16:46.865331 3371 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:16:46.865469 kubelet[3371]: I1213 13:16:46.865448 3371 kubelet.go:400] "Attempting to sync node with API server" Dec 13 13:16:46.865469 kubelet[3371]: I1213 13:16:46.865465 3371 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:16:46.865517 kubelet[3371]: I1213 13:16:46.865494 3371 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:16:46.865517 kubelet[3371]: I1213 13:16:46.865512 3371 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:16:46.867697 kubelet[3371]: I1213 13:16:46.867667 3371 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:16:46.867868 kubelet[3371]: I1213 13:16:46.867848 3371 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:16:46.868392 kubelet[3371]: I1213 13:16:46.868295 3371 server.go:1264] "Started kubelet" Dec 13 13:16:46.871670 kubelet[3371]: I1213 13:16:46.870111 3371 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:16:46.877225 kubelet[3371]: I1213 13:16:46.875714 3371 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:16:46.877225 kubelet[3371]: I1213 13:16:46.876774 3371 server.go:455] "Adding debug handlers to kubelet server" Dec 13 13:16:46.879858 kubelet[3371]: I1213 13:16:46.879790 3371 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:16:46.882135 kubelet[3371]: I1213 13:16:46.880012 3371 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:16:46.885327 kubelet[3371]: I1213 13:16:46.885298 3371 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:16:46.890240 kubelet[3371]: I1213 13:16:46.890180 3371 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 13:16:46.891208 kubelet[3371]: I1213 13:16:46.890360 3371 reconciler.go:26] "Reconciler: start to sync state" Dec 13 13:16:46.893086 kubelet[3371]: I1213 13:16:46.893034 3371 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:16:46.896627 kubelet[3371]: I1213 13:16:46.896580 3371 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 13:16:46.896627 kubelet[3371]: I1213 13:16:46.896635 3371 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:16:46.896774 kubelet[3371]: I1213 13:16:46.896655 3371 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 13:16:46.896774 kubelet[3371]: E1213 13:16:46.896700 3371 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:16:46.903578 kubelet[3371]: I1213 13:16:46.903544 3371 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:16:46.903836 kubelet[3371]: I1213 13:16:46.903812 3371 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:16:46.912319 kubelet[3371]: I1213 13:16:46.912288 3371 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:16:46.915310 kubelet[3371]: E1213 13:16:46.915261 3371 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:16:46.974626 kubelet[3371]: I1213 13:16:46.974592 3371 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:16:46.974626 kubelet[3371]: I1213 13:16:46.974615 3371 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:16:46.974787 kubelet[3371]: I1213 13:16:46.974639 3371 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:16:46.974833 kubelet[3371]: I1213 13:16:46.974810 3371 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 13:16:46.974862 kubelet[3371]: I1213 13:16:46.974830 3371 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 13:16:46.974862 kubelet[3371]: I1213 13:16:46.974850 3371 policy_none.go:49] "None policy: Start" Dec 13 13:16:46.975732 kubelet[3371]: I1213 13:16:46.975710 3371 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:16:46.975810 kubelet[3371]: I1213 13:16:46.975741 3371 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:16:46.975912 kubelet[3371]: I1213 13:16:46.975892 3371 state_mem.go:75] "Updated machine memory state" Dec 13 13:16:46.980703 kubelet[3371]: I1213 13:16:46.980308 3371 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:16:46.980703 kubelet[3371]: I1213 13:16:46.980487 3371 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 13:16:46.980703 kubelet[3371]: I1213 13:16:46.980605 3371 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:16:46.991503 kubelet[3371]: I1213 13:16:46.991476 3371 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186.0.0-a-128d80e197" Dec 13 13:16:46.997302 kubelet[3371]: I1213 13:16:46.997068 3371 topology_manager.go:215] "Topology Admit Handler" podUID="4c3be0cdb34a6fc69fdcb17e2a61aaec" podNamespace="kube-system" podName="kube-apiserver-ci-4186.0.0-a-128d80e197" Dec 13 13:16:46.997302 kubelet[3371]: I1213 13:16:46.997244 3371 topology_manager.go:215] "Topology Admit Handler" podUID="b418c78d5d8db69dcf91fe3a46823890" podNamespace="kube-system" podName="kube-controller-manager-ci-4186.0.0-a-128d80e197" Dec 13 13:16:46.997471 kubelet[3371]: I1213 13:16:46.997314 3371 topology_manager.go:215] "Topology Admit Handler" 
podUID="3c742519b8f8c3761a62aa0c322fc800" podNamespace="kube-system" podName="kube-scheduler-ci-4186.0.0-a-128d80e197" Dec 13 13:16:47.014360 kubelet[3371]: W1213 13:16:47.013548 3371 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 13:16:47.014662 kubelet[3371]: I1213 13:16:47.014619 3371 kubelet_node_status.go:112] "Node was previously registered" node="ci-4186.0.0-a-128d80e197" Dec 13 13:16:47.014761 kubelet[3371]: I1213 13:16:47.014741 3371 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186.0.0-a-128d80e197" Dec 13 13:16:47.017040 kubelet[3371]: W1213 13:16:47.016893 3371 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 13:16:47.019315 kubelet[3371]: W1213 13:16:47.019285 3371 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 13:16:47.052355 sudo[3403]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 13:16:47.052651 sudo[3403]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 13:16:47.092302 kubelet[3371]: I1213 13:16:47.091887 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c3be0cdb34a6fc69fdcb17e2a61aaec-ca-certs\") pod \"kube-apiserver-ci-4186.0.0-a-128d80e197\" (UID: \"4c3be0cdb34a6fc69fdcb17e2a61aaec\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-128d80e197" Dec 13 13:16:47.092302 kubelet[3371]: I1213 13:16:47.091930 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c3be0cdb34a6fc69fdcb17e2a61aaec-k8s-certs\") pod \"kube-apiserver-ci-4186.0.0-a-128d80e197\" (UID: \"4c3be0cdb34a6fc69fdcb17e2a61aaec\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-128d80e197" Dec 13 13:16:47.092302 kubelet[3371]: I1213 13:16:47.091951 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c3be0cdb34a6fc69fdcb17e2a61aaec-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.0.0-a-128d80e197\" (UID: \"4c3be0cdb34a6fc69fdcb17e2a61aaec\") " pod="kube-system/kube-apiserver-ci-4186.0.0-a-128d80e197" Dec 13 13:16:47.092302 kubelet[3371]: I1213 13:16:47.091968 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b418c78d5d8db69dcf91fe3a46823890-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.0.0-a-128d80e197\" (UID: \"b418c78d5d8db69dcf91fe3a46823890\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-128d80e197" Dec 13 13:16:47.092302 kubelet[3371]: I1213 13:16:47.091985 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b418c78d5d8db69dcf91fe3a46823890-k8s-certs\") pod \"kube-controller-manager-ci-4186.0.0-a-128d80e197\" (UID: \"b418c78d5d8db69dcf91fe3a46823890\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-128d80e197" Dec 13 13:16:47.092535 kubelet[3371]: I1213 13:16:47.092000 3371 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b418c78d5d8db69dcf91fe3a46823890-kubeconfig\") pod \"kube-controller-manager-ci-4186.0.0-a-128d80e197\" (UID: \"b418c78d5d8db69dcf91fe3a46823890\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-128d80e197" Dec 13 13:16:47.092535 kubelet[3371]: I1213 13:16:47.092016 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b418c78d5d8db69dcf91fe3a46823890-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.0.0-a-128d80e197\" (UID: \"b418c78d5d8db69dcf91fe3a46823890\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-128d80e197" Dec 13 13:16:47.092535 kubelet[3371]: I1213 13:16:47.092032 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b418c78d5d8db69dcf91fe3a46823890-ca-certs\") pod \"kube-controller-manager-ci-4186.0.0-a-128d80e197\" (UID: \"b418c78d5d8db69dcf91fe3a46823890\") " pod="kube-system/kube-controller-manager-ci-4186.0.0-a-128d80e197" Dec 13 13:16:47.092535 kubelet[3371]: I1213 13:16:47.092050 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3c742519b8f8c3761a62aa0c322fc800-kubeconfig\") pod \"kube-scheduler-ci-4186.0.0-a-128d80e197\" (UID: \"3c742519b8f8c3761a62aa0c322fc800\") " pod="kube-system/kube-scheduler-ci-4186.0.0-a-128d80e197" Dec 13 13:16:47.517242 sudo[3403]: pam_unix(sudo:session): session closed for user root Dec 13 13:16:47.866922 kubelet[3371]: I1213 13:16:47.866886 3371 apiserver.go:52] "Watching apiserver" Dec 13 13:16:47.890436 kubelet[3371]: I1213 13:16:47.890398 3371 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 13:16:47.973915 kubelet[3371]: I1213 13:16:47.973851 3371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186.0.0-a-128d80e197" podStartSLOduration=0.9738316 podStartE2EDuration="973.8316ms" podCreationTimestamp="2024-12-13 13:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:16:47.952943697 +0000 UTC m=+1.390246054" watchObservedRunningTime="2024-12-13 13:16:47.9738316 +0000 UTC m=+1.411133997" Dec 13 13:16:47.974082 kubelet[3371]: I1213 13:16:47.973969 3371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186.0.0-a-128d80e197" podStartSLOduration=0.97396536 podStartE2EDuration="973.96536ms" podCreationTimestamp="2024-12-13 13:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:16:47.972148518 +0000 UTC m=+1.409450915" watchObservedRunningTime="2024-12-13 13:16:47.97396536 +0000 UTC m=+1.411267757" Dec 13 13:16:48.006504 kubelet[3371]: I1213 13:16:48.006438 3371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186.0.0-a-128d80e197" podStartSLOduration=1.006418155 podStartE2EDuration="1.006418155s" podCreationTimestamp="2024-12-13 13:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:16:47.990955498 +0000 UTC m=+1.428257935" watchObservedRunningTime="2024-12-13 13:16:48.006418155 +0000 UTC m=+1.443720552" Dec 13 13:16:49.031090 sudo[2365]: pam_unix(sudo:session): session closed for user root Dec 13 13:16:49.109164 sshd[2364]: Connection closed by 10.200.16.10 port 52754 Dec 13 13:16:49.109740 sshd-session[2362]: pam_unix(sshd:session): session closed for user core Dec 13 13:16:49.113843 systemd[1]: sshd@6-10.200.20.34:22-10.200.16.10:52754.service: Deactivated successfully. Dec 13 13:16:49.115798 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 13:16:49.116080 systemd[1]: session-9.scope: Consumed 7.158s CPU time, 192.2M memory peak, 0B memory swap peak. Dec 13 13:16:49.116686 systemd-logind[1699]: Session 9 logged out. Waiting for processes to exit. Dec 13 13:16:49.117645 systemd-logind[1699]: Removed session 9. Dec 13 13:16:59.684014 kubelet[3371]: I1213 13:16:59.683919 3371 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 13:16:59.685340 kubelet[3371]: I1213 13:16:59.684666 3371 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 13:16:59.685408 containerd[1734]: time="2024-12-13T13:16:59.684371124Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 13:17:00.629960 kubelet[3371]: I1213 13:17:00.629909 3371 topology_manager.go:215] "Topology Admit Handler" podUID="8586645d-c092-47bf-a9c8-56dd9e82e2c7" podNamespace="kube-system" podName="cilium-5b4vd" Dec 13 13:17:00.633919 kubelet[3371]: I1213 13:17:00.633863 3371 topology_manager.go:215] "Topology Admit Handler" podUID="25a6517f-0b3f-4585-95d1-8a94a5df1c15" podNamespace="kube-system" podName="kube-proxy-9xcs2" Dec 13 13:17:00.642866 systemd[1]: Created slice kubepods-burstable-pod8586645d_c092_47bf_a9c8_56dd9e82e2c7.slice - libcontainer container kubepods-burstable-pod8586645d_c092_47bf_a9c8_56dd9e82e2c7.slice. Dec 13 13:17:00.652848 systemd[1]: Created slice kubepods-besteffort-pod25a6517f_0b3f_4585_95d1_8a94a5df1c15.slice - libcontainer container kubepods-besteffort-pod25a6517f_0b3f_4585_95d1_8a94a5df1c15.slice. 
Dec 13 13:17:00.679063 kubelet[3371]: I1213 13:17:00.679014 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-cilium-run\") pod \"cilium-5b4vd\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " pod="kube-system/cilium-5b4vd" Dec 13 13:17:00.679063 kubelet[3371]: I1213 13:17:00.679061 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-bpf-maps\") pod \"cilium-5b4vd\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " pod="kube-system/cilium-5b4vd" Dec 13 13:17:00.679250 kubelet[3371]: I1213 13:17:00.679081 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-cni-path\") pod \"cilium-5b4vd\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " pod="kube-system/cilium-5b4vd" Dec 13 13:17:00.679250 kubelet[3371]: I1213 13:17:00.679096 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8586645d-c092-47bf-a9c8-56dd9e82e2c7-hubble-tls\") pod \"cilium-5b4vd\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " pod="kube-system/cilium-5b4vd" Dec 13 13:17:00.679250 kubelet[3371]: I1213 13:17:00.679113 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25a6517f-0b3f-4585-95d1-8a94a5df1c15-lib-modules\") pod \"kube-proxy-9xcs2\" (UID: \"25a6517f-0b3f-4585-95d1-8a94a5df1c15\") " pod="kube-system/kube-proxy-9xcs2" Dec 13 13:17:00.679250 kubelet[3371]: I1213 13:17:00.679144 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-host-proc-sys-net\") pod \"cilium-5b4vd\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " pod="kube-system/cilium-5b4vd" Dec 13 13:17:00.679250 kubelet[3371]: I1213 13:17:00.679161 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-host-proc-sys-kernel\") pod \"cilium-5b4vd\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " pod="kube-system/cilium-5b4vd" Dec 13 13:17:00.679250 kubelet[3371]: I1213 13:17:00.679177 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25a6517f-0b3f-4585-95d1-8a94a5df1c15-xtables-lock\") pod \"kube-proxy-9xcs2\" (UID: \"25a6517f-0b3f-4585-95d1-8a94a5df1c15\") " pod="kube-system/kube-proxy-9xcs2" Dec 13 13:17:00.679376 kubelet[3371]: I1213 13:17:00.679192 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-lib-modules\") pod \"cilium-5b4vd\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " pod="kube-system/cilium-5b4vd" Dec 13 13:17:00.679376 kubelet[3371]: I1213 13:17:00.679206 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gjkc\" (UniqueName: 
\"kubernetes.io/projected/25a6517f-0b3f-4585-95d1-8a94a5df1c15-kube-api-access-9gjkc\") pod \"kube-proxy-9xcs2\" (UID: \"25a6517f-0b3f-4585-95d1-8a94a5df1c15\") " pod="kube-system/kube-proxy-9xcs2" Dec 13 13:17:00.679376 kubelet[3371]: I1213 13:17:00.679227 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8586645d-c092-47bf-a9c8-56dd9e82e2c7-clustermesh-secrets\") pod \"cilium-5b4vd\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " pod="kube-system/cilium-5b4vd" Dec 13 13:17:00.679376 kubelet[3371]: I1213 13:17:00.679244 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/25a6517f-0b3f-4585-95d1-8a94a5df1c15-kube-proxy\") pod \"kube-proxy-9xcs2\" (UID: \"25a6517f-0b3f-4585-95d1-8a94a5df1c15\") " pod="kube-system/kube-proxy-9xcs2" Dec 13 13:17:00.679376 kubelet[3371]: I1213 13:17:00.679259 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-cilium-cgroup\") pod \"cilium-5b4vd\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " pod="kube-system/cilium-5b4vd" Dec 13 13:17:00.679476 kubelet[3371]: I1213 13:17:00.679286 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xr8j\" (UniqueName: \"kubernetes.io/projected/8586645d-c092-47bf-a9c8-56dd9e82e2c7-kube-api-access-8xr8j\") pod \"cilium-5b4vd\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " pod="kube-system/cilium-5b4vd" Dec 13 13:17:00.679476 kubelet[3371]: I1213 13:17:00.679300 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-xtables-lock\") pod \"cilium-5b4vd\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " pod="kube-system/cilium-5b4vd" Dec 13 13:17:00.679476 kubelet[3371]: I1213 13:17:00.679313 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8586645d-c092-47bf-a9c8-56dd9e82e2c7-cilium-config-path\") pod \"cilium-5b4vd\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " pod="kube-system/cilium-5b4vd" Dec 13 13:17:00.679476 kubelet[3371]: I1213 13:17:00.679330 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-hostproc\") pod \"cilium-5b4vd\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " pod="kube-system/cilium-5b4vd" Dec 13 13:17:00.679476 kubelet[3371]: I1213 13:17:00.679346 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-etc-cni-netd\") pod \"cilium-5b4vd\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " pod="kube-system/cilium-5b4vd" Dec 13 13:17:00.860084 kubelet[3371]: I1213 13:17:00.858456 3371 topology_manager.go:215] "Topology Admit Handler" podUID="bfdc435f-db7f-4c33-ae6b-9b68ec6f47be" podNamespace="kube-system" podName="cilium-operator-599987898-hgtch" Dec 13 13:17:00.869248 systemd[1]: Created slice kubepods-besteffort-podbfdc435f_db7f_4c33_ae6b_9b68ec6f47be.slice - 
libcontainer container kubepods-besteffort-podbfdc435f_db7f_4c33_ae6b_9b68ec6f47be.slice. Dec 13 13:17:00.881145 kubelet[3371]: I1213 13:17:00.880964 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bfdc435f-db7f-4c33-ae6b-9b68ec6f47be-cilium-config-path\") pod \"cilium-operator-599987898-hgtch\" (UID: \"bfdc435f-db7f-4c33-ae6b-9b68ec6f47be\") " pod="kube-system/cilium-operator-599987898-hgtch" Dec 13 13:17:00.881145 kubelet[3371]: I1213 13:17:00.881014 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbfwp\" (UniqueName: \"kubernetes.io/projected/bfdc435f-db7f-4c33-ae6b-9b68ec6f47be-kube-api-access-xbfwp\") pod \"cilium-operator-599987898-hgtch\" (UID: \"bfdc435f-db7f-4c33-ae6b-9b68ec6f47be\") " pod="kube-system/cilium-operator-599987898-hgtch" Dec 13 13:17:00.951770 containerd[1734]: time="2024-12-13T13:17:00.951659463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5b4vd,Uid:8586645d-c092-47bf-a9c8-56dd9e82e2c7,Namespace:kube-system,Attempt:0,}" Dec 13 13:17:00.963435 containerd[1734]: time="2024-12-13T13:17:00.963103955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9xcs2,Uid:25a6517f-0b3f-4585-95d1-8a94a5df1c15,Namespace:kube-system,Attempt:0,}" Dec 13 13:17:01.012465 containerd[1734]: time="2024-12-13T13:17:01.011938687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:17:01.012465 containerd[1734]: time="2024-12-13T13:17:01.012105247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:17:01.012465 containerd[1734]: time="2024-12-13T13:17:01.012188807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:17:01.012465 containerd[1734]: time="2024-12-13T13:17:01.012370247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:17:01.013007 containerd[1734]: time="2024-12-13T13:17:01.012896328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:17:01.013007 containerd[1734]: time="2024-12-13T13:17:01.012970408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:17:01.013238 containerd[1734]: time="2024-12-13T13:17:01.012987448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:17:01.013480 containerd[1734]: time="2024-12-13T13:17:01.013349368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:17:01.031333 systemd[1]: Started cri-containerd-faf95902c8864b2309bcca06f013853c2a54e69b81d17f91ffe4c964e7973e75.scope - libcontainer container faf95902c8864b2309bcca06f013853c2a54e69b81d17f91ffe4c964e7973e75. Dec 13 13:17:01.037496 systemd[1]: Started cri-containerd-8d0a985273b39d435794d8ed2eecfad36fdcc473222613af27569111675fbb26.scope - libcontainer container 8d0a985273b39d435794d8ed2eecfad36fdcc473222613af27569111675fbb26. 
Dec 13 13:17:01.063778 containerd[1734]: time="2024-12-13T13:17:01.063484701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5b4vd,Uid:8586645d-c092-47bf-a9c8-56dd9e82e2c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"faf95902c8864b2309bcca06f013853c2a54e69b81d17f91ffe4c964e7973e75\"" Dec 13 13:17:01.067951 containerd[1734]: time="2024-12-13T13:17:01.067628826Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 13:17:01.078148 containerd[1734]: time="2024-12-13T13:17:01.077979357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9xcs2,Uid:25a6517f-0b3f-4585-95d1-8a94a5df1c15,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d0a985273b39d435794d8ed2eecfad36fdcc473222613af27569111675fbb26\"" Dec 13 13:17:01.081593 containerd[1734]: time="2024-12-13T13:17:01.081363800Z" level=info msg="CreateContainer within sandbox \"8d0a985273b39d435794d8ed2eecfad36fdcc473222613af27569111675fbb26\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 13:17:01.148844 containerd[1734]: time="2024-12-13T13:17:01.148721151Z" level=info msg="CreateContainer within sandbox \"8d0a985273b39d435794d8ed2eecfad36fdcc473222613af27569111675fbb26\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9d74a855956ce3c2a6f656d2bf0ac3c4b86e477e7242f291b9350085c9eb4374\"" Dec 13 13:17:01.151185 containerd[1734]: time="2024-12-13T13:17:01.149836073Z" level=info msg="StartContainer for \"9d74a855956ce3c2a6f656d2bf0ac3c4b86e477e7242f291b9350085c9eb4374\"" Dec 13 13:17:01.173841 containerd[1734]: time="2024-12-13T13:17:01.173796058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-hgtch,Uid:bfdc435f-db7f-4c33-ae6b-9b68ec6f47be,Namespace:kube-system,Attempt:0,}" Dec 13 13:17:01.178368 systemd[1]: Started cri-containerd-9d74a855956ce3c2a6f656d2bf0ac3c4b86e477e7242f291b9350085c9eb4374.scope - libcontainer container 9d74a855956ce3c2a6f656d2bf0ac3c4b86e477e7242f291b9350085c9eb4374. Dec 13 13:17:01.210849 containerd[1734]: time="2024-12-13T13:17:01.210801977Z" level=info msg="StartContainer for \"9d74a855956ce3c2a6f656d2bf0ac3c4b86e477e7242f291b9350085c9eb4374\" returns successfully" Dec 13 13:17:01.228376 containerd[1734]: time="2024-12-13T13:17:01.228196355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:17:01.228376 containerd[1734]: time="2024-12-13T13:17:01.228258275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:17:01.228376 containerd[1734]: time="2024-12-13T13:17:01.228269155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:17:01.228654 containerd[1734]: time="2024-12-13T13:17:01.228362155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:17:01.246457 systemd[1]: Started cri-containerd-2c9ceb3f1bd6d53cee22fce30be309e3697d48140896c04e273d6e3fa3c39006.scope - libcontainer container 2c9ceb3f1bd6d53cee22fce30be309e3697d48140896c04e273d6e3fa3c39006. 
Dec 13 13:17:01.289546 containerd[1734]: time="2024-12-13T13:17:01.289462140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-hgtch,Uid:bfdc435f-db7f-4c33-ae6b-9b68ec6f47be,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c9ceb3f1bd6d53cee22fce30be309e3697d48140896c04e273d6e3fa3c39006\"" Dec 13 13:17:09.186776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount792048501.mount: Deactivated successfully. Dec 13 13:17:10.705996 containerd[1734]: time="2024-12-13T13:17:10.705937808Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:17:10.712791 containerd[1734]: time="2024-12-13T13:17:10.712721456Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651482" Dec 13 13:17:10.714102 containerd[1734]: time="2024-12-13T13:17:10.714052657Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:17:10.716176 containerd[1734]: time="2024-12-13T13:17:10.716038180Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.648361274s" Dec 13 13:17:10.716176 containerd[1734]: time="2024-12-13T13:17:10.716072860Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 13 13:17:10.717868 containerd[1734]: time="2024-12-13T13:17:10.717654301Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 13:17:10.720160 containerd[1734]: time="2024-12-13T13:17:10.720024704Z" level=info msg="CreateContainer within sandbox \"faf95902c8864b2309bcca06f013853c2a54e69b81d17f91ffe4c964e7973e75\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 13:17:10.764094 containerd[1734]: time="2024-12-13T13:17:10.764046312Z" level=info msg="CreateContainer within sandbox \"faf95902c8864b2309bcca06f013853c2a54e69b81d17f91ffe4c964e7973e75\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0ee72be1d968b1d144dec1143467b199b1c372090842dd0fc191f349a6efad36\"" Dec 13 13:17:10.765303 containerd[1734]: time="2024-12-13T13:17:10.764888913Z" level=info msg="StartContainer for \"0ee72be1d968b1d144dec1143467b199b1c372090842dd0fc191f349a6efad36\"" Dec 13 13:17:10.795332 systemd[1]: Started cri-containerd-0ee72be1d968b1d144dec1143467b199b1c372090842dd0fc191f349a6efad36.scope - libcontainer container 0ee72be1d968b1d144dec1143467b199b1c372090842dd0fc191f349a6efad36. 
Dec 13 13:17:10.821457 containerd[1734]: time="2024-12-13T13:17:10.821374096Z" level=info msg="StartContainer for \"0ee72be1d968b1d144dec1143467b199b1c372090842dd0fc191f349a6efad36\" returns successfully" Dec 13 13:17:10.828680 systemd[1]: cri-containerd-0ee72be1d968b1d144dec1143467b199b1c372090842dd0fc191f349a6efad36.scope: Deactivated successfully. Dec 13 13:17:11.750534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ee72be1d968b1d144dec1143467b199b1c372090842dd0fc191f349a6efad36-rootfs.mount: Deactivated successfully. Dec 13 13:17:12.024839 kubelet[3371]: I1213 13:17:12.024352 3371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9xcs2" podStartSLOduration=12.024332981 podStartE2EDuration="12.024332981s" podCreationTimestamp="2024-12-13 13:17:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:17:01.997553288 +0000 UTC m=+15.434855685" watchObservedRunningTime="2024-12-13 13:17:12.024332981 +0000 UTC m=+25.461635418" Dec 13 13:17:12.672829 containerd[1734]: time="2024-12-13T13:17:12.672766256Z" level=info msg="shim disconnected" id=0ee72be1d968b1d144dec1143467b199b1c372090842dd0fc191f349a6efad36 namespace=k8s.io Dec 13 13:17:12.672829 containerd[1734]: time="2024-12-13T13:17:12.672821856Z" level=warning msg="cleaning up after shim disconnected" id=0ee72be1d968b1d144dec1143467b199b1c372090842dd0fc191f349a6efad36 namespace=k8s.io Dec 13 13:17:12.673276 containerd[1734]: time="2024-12-13T13:17:12.672845696Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:17:13.012798 containerd[1734]: time="2024-12-13T13:17:13.012428830Z" level=info msg="CreateContainer within sandbox \"faf95902c8864b2309bcca06f013853c2a54e69b81d17f91ffe4c964e7973e75\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 13:17:13.045317 containerd[1734]: time="2024-12-13T13:17:13.045267946Z" level=info msg="CreateContainer within sandbox \"faf95902c8864b2309bcca06f013853c2a54e69b81d17f91ffe4c964e7973e75\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6f115bc0544ae7d2ab02bee88d5e4b8d8b6ee812eb2dcff8e1dd54971807b4a3\"" Dec 13 13:17:13.046010 containerd[1734]: time="2024-12-13T13:17:13.045769706Z" level=info msg="StartContainer for \"6f115bc0544ae7d2ab02bee88d5e4b8d8b6ee812eb2dcff8e1dd54971807b4a3\"" Dec 13 13:17:13.077369 systemd[1]: Started cri-containerd-6f115bc0544ae7d2ab02bee88d5e4b8d8b6ee812eb2dcff8e1dd54971807b4a3.scope - libcontainer container 6f115bc0544ae7d2ab02bee88d5e4b8d8b6ee812eb2dcff8e1dd54971807b4a3. Dec 13 13:17:13.107476 containerd[1734]: time="2024-12-13T13:17:13.106331853Z" level=info msg="StartContainer for \"6f115bc0544ae7d2ab02bee88d5e4b8d8b6ee812eb2dcff8e1dd54971807b4a3\" returns successfully" Dec 13 13:17:13.112266 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 13:17:13.112476 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:17:13.112547 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:17:13.118057 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:17:13.118490 systemd[1]: cri-containerd-6f115bc0544ae7d2ab02bee88d5e4b8d8b6ee812eb2dcff8e1dd54971807b4a3.scope: Deactivated successfully. Dec 13 13:17:13.135242 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Dec 13 13:17:13.142254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f115bc0544ae7d2ab02bee88d5e4b8d8b6ee812eb2dcff8e1dd54971807b4a3-rootfs.mount: Deactivated successfully. Dec 13 13:17:13.164230 containerd[1734]: time="2024-12-13T13:17:13.164095277Z" level=info msg="shim disconnected" id=6f115bc0544ae7d2ab02bee88d5e4b8d8b6ee812eb2dcff8e1dd54971807b4a3 namespace=k8s.io Dec 13 13:17:13.164230 containerd[1734]: time="2024-12-13T13:17:13.164183117Z" level=warning msg="cleaning up after shim disconnected" id=6f115bc0544ae7d2ab02bee88d5e4b8d8b6ee812eb2dcff8e1dd54971807b4a3 namespace=k8s.io Dec 13 13:17:13.164230 containerd[1734]: time="2024-12-13T13:17:13.164191757Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:17:14.016383 containerd[1734]: time="2024-12-13T13:17:14.015684535Z" level=info msg="CreateContainer within sandbox \"faf95902c8864b2309bcca06f013853c2a54e69b81d17f91ffe4c964e7973e75\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 13:17:14.049371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2154242426.mount: Deactivated successfully. Dec 13 13:17:14.059715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1959398627.mount: Deactivated successfully. Dec 13 13:17:14.077270 containerd[1734]: time="2024-12-13T13:17:14.077216403Z" level=info msg="CreateContainer within sandbox \"faf95902c8864b2309bcca06f013853c2a54e69b81d17f91ffe4c964e7973e75\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"791baa6075bf070d06b7f3c8c558daa2bc98d7fdef628894a2c63ca5a06e8d61\"" Dec 13 13:17:14.080286 containerd[1734]: time="2024-12-13T13:17:14.080231646Z" level=info msg="StartContainer for \"791baa6075bf070d06b7f3c8c558daa2bc98d7fdef628894a2c63ca5a06e8d61\"" Dec 13 13:17:14.116330 systemd[1]: Started cri-containerd-791baa6075bf070d06b7f3c8c558daa2bc98d7fdef628894a2c63ca5a06e8d61.scope - libcontainer container 791baa6075bf070d06b7f3c8c558daa2bc98d7fdef628894a2c63ca5a06e8d61. Dec 13 13:17:14.148956 systemd[1]: cri-containerd-791baa6075bf070d06b7f3c8c558daa2bc98d7fdef628894a2c63ca5a06e8d61.scope: Deactivated successfully. 
Dec 13 13:17:14.158205 containerd[1734]: time="2024-12-13T13:17:14.158071172Z" level=info msg="StartContainer for \"791baa6075bf070d06b7f3c8c558daa2bc98d7fdef628894a2c63ca5a06e8d61\" returns successfully" Dec 13 13:17:14.210292 containerd[1734]: time="2024-12-13T13:17:14.210196869Z" level=info msg="shim disconnected" id=791baa6075bf070d06b7f3c8c558daa2bc98d7fdef628894a2c63ca5a06e8d61 namespace=k8s.io Dec 13 13:17:14.210292 containerd[1734]: time="2024-12-13T13:17:14.210278750Z" level=warning msg="cleaning up after shim disconnected" id=791baa6075bf070d06b7f3c8c558daa2bc98d7fdef628894a2c63ca5a06e8d61 namespace=k8s.io Dec 13 13:17:14.210292 containerd[1734]: time="2024-12-13T13:17:14.210289390Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:17:14.732923 containerd[1734]: time="2024-12-13T13:17:14.732860805Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:17:14.735258 containerd[1734]: time="2024-12-13T13:17:14.735059488Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17137726" Dec 13 13:17:14.737638 containerd[1734]: time="2024-12-13T13:17:14.737573011Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:17:14.739168 containerd[1734]: time="2024-12-13T13:17:14.739010812Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.021325151s" Dec 13 13:17:14.739168 containerd[1734]: time="2024-12-13T13:17:14.739050492Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 13 13:17:14.742682 containerd[1734]: time="2024-12-13T13:17:14.742628416Z" level=info msg="CreateContainer within sandbox \"2c9ceb3f1bd6d53cee22fce30be309e3697d48140896c04e273d6e3fa3c39006\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 13:17:14.769236 containerd[1734]: time="2024-12-13T13:17:14.769117485Z" level=info msg="CreateContainer within sandbox \"2c9ceb3f1bd6d53cee22fce30be309e3697d48140896c04e273d6e3fa3c39006\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dc6825393fcf4bdaf8ad72acbaa9d60d7f008247900d872ecb0c909d2e79b7f4\"" Dec 13 13:17:14.770013 containerd[1734]: time="2024-12-13T13:17:14.769974406Z" level=info msg="StartContainer for \"dc6825393fcf4bdaf8ad72acbaa9d60d7f008247900d872ecb0c909d2e79b7f4\"" Dec 13 13:17:14.797351 systemd[1]: Started cri-containerd-dc6825393fcf4bdaf8ad72acbaa9d60d7f008247900d872ecb0c909d2e79b7f4.scope - libcontainer container dc6825393fcf4bdaf8ad72acbaa9d60d7f008247900d872ecb0c909d2e79b7f4. 
Dec 13 13:17:14.823267 containerd[1734]: time="2024-12-13T13:17:14.823212665Z" level=info msg="StartContainer for \"dc6825393fcf4bdaf8ad72acbaa9d60d7f008247900d872ecb0c909d2e79b7f4\" returns successfully" Dec 13 13:17:15.023393 containerd[1734]: time="2024-12-13T13:17:15.023084845Z" level=info msg="CreateContainer within sandbox \"faf95902c8864b2309bcca06f013853c2a54e69b81d17f91ffe4c964e7973e75\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 13:17:15.053524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-791baa6075bf070d06b7f3c8c558daa2bc98d7fdef628894a2c63ca5a06e8d61-rootfs.mount: Deactivated successfully. Dec 13 13:17:15.062723 containerd[1734]: time="2024-12-13T13:17:15.062659569Z" level=info msg="CreateContainer within sandbox \"faf95902c8864b2309bcca06f013853c2a54e69b81d17f91ffe4c964e7973e75\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8a80b64fcd5d272d222210400e5e9bf35b082fcf8fa80eba5259d6e1dcad39d4\"" Dec 13 13:17:15.063651 containerd[1734]: time="2024-12-13T13:17:15.063610450Z" level=info msg="StartContainer for \"8a80b64fcd5d272d222210400e5e9bf35b082fcf8fa80eba5259d6e1dcad39d4\"" Dec 13 13:17:15.112371 systemd[1]: Started cri-containerd-8a80b64fcd5d272d222210400e5e9bf35b082fcf8fa80eba5259d6e1dcad39d4.scope - libcontainer container 8a80b64fcd5d272d222210400e5e9bf35b082fcf8fa80eba5259d6e1dcad39d4. Dec 13 13:17:15.159533 systemd[1]: cri-containerd-8a80b64fcd5d272d222210400e5e9bf35b082fcf8fa80eba5259d6e1dcad39d4.scope: Deactivated successfully. Dec 13 13:17:15.171576 containerd[1734]: time="2024-12-13T13:17:15.171522929Z" level=info msg="StartContainer for \"8a80b64fcd5d272d222210400e5e9bf35b082fcf8fa80eba5259d6e1dcad39d4\" returns successfully" Dec 13 13:17:15.564066 containerd[1734]: time="2024-12-13T13:17:15.563976601Z" level=info msg="shim disconnected" id=8a80b64fcd5d272d222210400e5e9bf35b082fcf8fa80eba5259d6e1dcad39d4 namespace=k8s.io Dec 13 13:17:15.564066 containerd[1734]: time="2024-12-13T13:17:15.564056561Z" level=warning msg="cleaning up after shim disconnected" id=8a80b64fcd5d272d222210400e5e9bf35b082fcf8fa80eba5259d6e1dcad39d4 namespace=k8s.io Dec 13 13:17:15.564066 containerd[1734]: time="2024-12-13T13:17:15.564066361Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:17:16.036258 containerd[1734]: time="2024-12-13T13:17:16.036194921Z" level=info msg="CreateContainer within sandbox \"faf95902c8864b2309bcca06f013853c2a54e69b81d17f91ffe4c964e7973e75\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 13:17:16.049409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a80b64fcd5d272d222210400e5e9bf35b082fcf8fa80eba5259d6e1dcad39d4-rootfs.mount: Deactivated successfully. 
Dec 13 13:17:16.056273 kubelet[3371]: I1213 13:17:16.056020 3371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-hgtch" podStartSLOduration=2.608501793 podStartE2EDuration="16.055989783s" podCreationTimestamp="2024-12-13 13:17:00 +0000 UTC" firstStartedPulling="2024-12-13 13:17:01.292507543 +0000 UTC m=+14.729809900" lastFinishedPulling="2024-12-13 13:17:14.739995493 +0000 UTC m=+28.177297890" observedRunningTime="2024-12-13 13:17:15.0906352 +0000 UTC m=+28.527937597" watchObservedRunningTime="2024-12-13 13:17:16.055989783 +0000 UTC m=+29.493292180" Dec 13 13:17:16.099730 containerd[1734]: time="2024-12-13T13:17:16.099633951Z" level=info msg="CreateContainer within sandbox \"faf95902c8864b2309bcca06f013853c2a54e69b81d17f91ffe4c964e7973e75\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"99feda62a869506fc48b24dcb9888912a46943a65744ea78a1d3f39783a0d6d1\"" Dec 13 13:17:16.101203 containerd[1734]: time="2024-12-13T13:17:16.100377952Z" level=info msg="StartContainer for \"99feda62a869506fc48b24dcb9888912a46943a65744ea78a1d3f39783a0d6d1\"" Dec 13 13:17:16.127348 systemd[1]: Started cri-containerd-99feda62a869506fc48b24dcb9888912a46943a65744ea78a1d3f39783a0d6d1.scope - libcontainer container 99feda62a869506fc48b24dcb9888912a46943a65744ea78a1d3f39783a0d6d1. Dec 13 13:17:16.158094 containerd[1734]: time="2024-12-13T13:17:16.158029816Z" level=info msg="StartContainer for \"99feda62a869506fc48b24dcb9888912a46943a65744ea78a1d3f39783a0d6d1\" returns successfully" Dec 13 13:17:16.239644 kubelet[3371]: I1213 13:17:16.239605 3371 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 13:17:16.272748 kubelet[3371]: I1213 13:17:16.272698 3371 topology_manager.go:215] "Topology Admit Handler" podUID="67ea676a-e60f-4a1b-87e6-2b99ebd86137" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gwmh8" Dec 13 13:17:16.278438 kubelet[3371]: I1213 13:17:16.278393 3371 topology_manager.go:215] "Topology Admit Handler" podUID="7f310cac-8738-466a-980d-e11c5846c880" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6p2ts" Dec 13 13:17:16.282659 systemd[1]: Created slice kubepods-burstable-pod67ea676a_e60f_4a1b_87e6_2b99ebd86137.slice - libcontainer container kubepods-burstable-pod67ea676a_e60f_4a1b_87e6_2b99ebd86137.slice. Dec 13 13:17:16.293628 systemd[1]: Created slice kubepods-burstable-pod7f310cac_8738_466a_980d_e11c5846c880.slice - libcontainer container kubepods-burstable-pod7f310cac_8738_466a_980d_e11c5846c880.slice. 
Dec 13 13:17:16.384431 kubelet[3371]: I1213 13:17:16.384387 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67ea676a-e60f-4a1b-87e6-2b99ebd86137-config-volume\") pod \"coredns-7db6d8ff4d-gwmh8\" (UID: \"67ea676a-e60f-4a1b-87e6-2b99ebd86137\") " pod="kube-system/coredns-7db6d8ff4d-gwmh8" Dec 13 13:17:16.384431 kubelet[3371]: I1213 13:17:16.384431 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f310cac-8738-466a-980d-e11c5846c880-config-volume\") pod \"coredns-7db6d8ff4d-6p2ts\" (UID: \"7f310cac-8738-466a-980d-e11c5846c880\") " pod="kube-system/coredns-7db6d8ff4d-6p2ts" Dec 13 13:17:16.384595 kubelet[3371]: I1213 13:17:16.384452 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhtjk\" (UniqueName: \"kubernetes.io/projected/67ea676a-e60f-4a1b-87e6-2b99ebd86137-kube-api-access-mhtjk\") pod \"coredns-7db6d8ff4d-gwmh8\" (UID: \"67ea676a-e60f-4a1b-87e6-2b99ebd86137\") " pod="kube-system/coredns-7db6d8ff4d-gwmh8" Dec 13 13:17:16.384595 kubelet[3371]: I1213 13:17:16.384470 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l4k6\" (UniqueName: \"kubernetes.io/projected/7f310cac-8738-466a-980d-e11c5846c880-kube-api-access-8l4k6\") pod \"coredns-7db6d8ff4d-6p2ts\" (UID: \"7f310cac-8738-466a-980d-e11c5846c880\") " pod="kube-system/coredns-7db6d8ff4d-6p2ts" Dec 13 13:17:16.591976 containerd[1734]: time="2024-12-13T13:17:16.591366093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gwmh8,Uid:67ea676a-e60f-4a1b-87e6-2b99ebd86137,Namespace:kube-system,Attempt:0,}" Dec 13 13:17:16.597385 containerd[1734]: time="2024-12-13T13:17:16.597110739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6p2ts,Uid:7f310cac-8738-466a-980d-e11c5846c880,Namespace:kube-system,Attempt:0,}" Dec 13 13:17:17.073201 kubelet[3371]: I1213 13:17:17.072958 3371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5b4vd" podStartSLOduration=7.422284148 podStartE2EDuration="17.072941504s" podCreationTimestamp="2024-12-13 13:17:00 +0000 UTC" firstStartedPulling="2024-12-13 13:17:01.066664185 +0000 UTC m=+14.503966582" lastFinishedPulling="2024-12-13 13:17:10.717321581 +0000 UTC m=+24.154623938" observedRunningTime="2024-12-13 13:17:17.072543863 +0000 UTC m=+30.509846260" watchObservedRunningTime="2024-12-13 13:17:17.072941504 +0000 UTC m=+30.510243901" Dec 13 13:17:19.046534 systemd-networkd[1451]: cilium_host: Link UP Dec 13 13:17:19.046669 systemd-networkd[1451]: cilium_net: Link UP Dec 13 13:17:19.050723 systemd-networkd[1451]: cilium_net: Gained carrier Dec 13 13:17:19.050921 systemd-networkd[1451]: cilium_host: Gained carrier Dec 13 13:17:19.051014 systemd-networkd[1451]: cilium_net: Gained IPv6LL Dec 13 13:17:19.051119 systemd-networkd[1451]: cilium_host: Gained IPv6LL Dec 13 13:17:19.212280 systemd-networkd[1451]: cilium_vxlan: Link UP Dec 13 13:17:19.212292 systemd-networkd[1451]: cilium_vxlan: Gained carrier Dec 13 13:17:19.499246 kernel: NET: Registered PF_ALG protocol family Dec 13 13:17:20.230825 systemd-networkd[1451]: lxc_health: Link UP Dec 13 13:17:20.237802 systemd-networkd[1451]: lxc_health: Gained carrier Dec 13 13:17:20.685069 systemd-networkd[1451]: lxcaa2f1b250d7a: 
Link UP Dec 13 13:17:20.691116 systemd-networkd[1451]: lxc44bdfb133baf: Link UP Dec 13 13:17:20.700557 kernel: eth0: renamed from tmp8a875 Dec 13 13:17:20.713211 kernel: eth0: renamed from tmpc176a Dec 13 13:17:20.721431 systemd-networkd[1451]: lxcaa2f1b250d7a: Gained carrier Dec 13 13:17:20.721651 systemd-networkd[1451]: lxc44bdfb133baf: Gained carrier Dec 13 13:17:20.807285 systemd-networkd[1451]: cilium_vxlan: Gained IPv6LL Dec 13 13:17:21.639298 systemd-networkd[1451]: lxc_health: Gained IPv6LL Dec 13 13:17:22.151267 systemd-networkd[1451]: lxcaa2f1b250d7a: Gained IPv6LL Dec 13 13:17:22.536322 systemd-networkd[1451]: lxc44bdfb133baf: Gained IPv6LL Dec 13 13:17:24.525320 containerd[1734]: time="2024-12-13T13:17:24.518325970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:17:24.525320 containerd[1734]: time="2024-12-13T13:17:24.518394850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:17:24.525320 containerd[1734]: time="2024-12-13T13:17:24.518410330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:17:24.525320 containerd[1734]: time="2024-12-13T13:17:24.518496011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:17:24.529385 containerd[1734]: time="2024-12-13T13:17:24.529114463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:17:24.529385 containerd[1734]: time="2024-12-13T13:17:24.529219863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:17:24.529385 containerd[1734]: time="2024-12-13T13:17:24.529236903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:17:24.529385 containerd[1734]: time="2024-12-13T13:17:24.529336663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:17:24.570553 systemd[1]: run-containerd-runc-k8s.io-8a875fc9b3300158a23b7e7edc47e099a9bb4f40cec894e09a66ad7c7bed9f4b-runc.KKbgFr.mount: Deactivated successfully. Dec 13 13:17:24.581349 systemd[1]: Started cri-containerd-8a875fc9b3300158a23b7e7edc47e099a9bb4f40cec894e09a66ad7c7bed9f4b.scope - libcontainer container 8a875fc9b3300158a23b7e7edc47e099a9bb4f40cec894e09a66ad7c7bed9f4b. Dec 13 13:17:24.584481 systemd[1]: Started cri-containerd-c176a2f656d03deeaa993fca76ca266475d5586678da82aad63e39715f7aae87.scope - libcontainer container c176a2f656d03deeaa993fca76ca266475d5586678da82aad63e39715f7aae87. 
Dec 13 13:17:24.629033 containerd[1734]: time="2024-12-13T13:17:24.628983096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gwmh8,Uid:67ea676a-e60f-4a1b-87e6-2b99ebd86137,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a875fc9b3300158a23b7e7edc47e099a9bb4f40cec894e09a66ad7c7bed9f4b\"" Dec 13 13:17:24.637667 containerd[1734]: time="2024-12-13T13:17:24.637593665Z" level=info msg="CreateContainer within sandbox \"8a875fc9b3300158a23b7e7edc47e099a9bb4f40cec894e09a66ad7c7bed9f4b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 13:17:24.650319 containerd[1734]: time="2024-12-13T13:17:24.650182000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6p2ts,Uid:7f310cac-8738-466a-980d-e11c5846c880,Namespace:kube-system,Attempt:0,} returns sandbox id \"c176a2f656d03deeaa993fca76ca266475d5586678da82aad63e39715f7aae87\"" Dec 13 13:17:24.655395 containerd[1734]: time="2024-12-13T13:17:24.655354165Z" level=info msg="CreateContainer within sandbox \"c176a2f656d03deeaa993fca76ca266475d5586678da82aad63e39715f7aae87\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 13:17:24.687220 containerd[1734]: time="2024-12-13T13:17:24.687178921Z" level=info msg="CreateContainer within sandbox \"8a875fc9b3300158a23b7e7edc47e099a9bb4f40cec894e09a66ad7c7bed9f4b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5fd873533b89a4aeaeb64d3c2456a566470caf33d2e193de5eaf7e2e9fd5e2f5\"" Dec 13 13:17:24.688669 containerd[1734]: time="2024-12-13T13:17:24.687933322Z" level=info msg="StartContainer for \"5fd873533b89a4aeaeb64d3c2456a566470caf33d2e193de5eaf7e2e9fd5e2f5\"" Dec 13 13:17:24.708819 containerd[1734]: time="2024-12-13T13:17:24.708350465Z" level=info msg="CreateContainer within sandbox \"c176a2f656d03deeaa993fca76ca266475d5586678da82aad63e39715f7aae87\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ec760b6b2143019f99fe3d66495384caba7adcdbfaf02b0470404e70904c5577\"" Dec 13 13:17:24.710977 containerd[1734]: time="2024-12-13T13:17:24.710904268Z" level=info msg="StartContainer for \"ec760b6b2143019f99fe3d66495384caba7adcdbfaf02b0470404e70904c5577\"" Dec 13 13:17:24.716511 systemd[1]: Started cri-containerd-5fd873533b89a4aeaeb64d3c2456a566470caf33d2e193de5eaf7e2e9fd5e2f5.scope - libcontainer container 5fd873533b89a4aeaeb64d3c2456a566470caf33d2e193de5eaf7e2e9fd5e2f5. Dec 13 13:17:24.751364 systemd[1]: Started cri-containerd-ec760b6b2143019f99fe3d66495384caba7adcdbfaf02b0470404e70904c5577.scope - libcontainer container ec760b6b2143019f99fe3d66495384caba7adcdbfaf02b0470404e70904c5577. 
Dec 13 13:17:24.759984 containerd[1734]: time="2024-12-13T13:17:24.759923684Z" level=info msg="StartContainer for \"5fd873533b89a4aeaeb64d3c2456a566470caf33d2e193de5eaf7e2e9fd5e2f5\" returns successfully" Dec 13 13:17:24.796031 containerd[1734]: time="2024-12-13T13:17:24.795860844Z" level=info msg="StartContainer for \"ec760b6b2143019f99fe3d66495384caba7adcdbfaf02b0470404e70904c5577\" returns successfully" Dec 13 13:17:25.075569 kubelet[3371]: I1213 13:17:25.075434 3371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6p2ts" podStartSLOduration=25.07541612 podStartE2EDuration="25.07541612s" podCreationTimestamp="2024-12-13 13:17:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:17:25.07462808 +0000 UTC m=+38.511930477" watchObservedRunningTime="2024-12-13 13:17:25.07541612 +0000 UTC m=+38.512718517" Dec 13 13:17:25.093230 kubelet[3371]: I1213 13:17:25.093051 3371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gwmh8" podStartSLOduration=25.09303274 podStartE2EDuration="25.09303274s" podCreationTimestamp="2024-12-13 13:17:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:17:25.090609298 +0000 UTC m=+38.527911695" watchObservedRunningTime="2024-12-13 13:17:25.09303274 +0000 UTC m=+38.530335137" Dec 13 13:17:25.534360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2198525877.mount: Deactivated successfully. Dec 13 13:19:09.980690 systemd[1]: Started sshd@7-10.200.20.34:22-10.200.16.10:58048.service - OpenSSH per-connection server daemon (10.200.16.10:58048). Dec 13 13:19:10.418229 sshd[4777]: Accepted publickey for core from 10.200.16.10 port 58048 ssh2: RSA SHA256:s/ry0hNLnvKqnMQ9cPrjUFS9LNOYotk3LUB29ZrhvrI Dec 13 13:19:10.419620 sshd-session[4777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:19:10.423566 systemd-logind[1699]: New session 10 of user core. Dec 13 13:19:10.428299 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 13:19:10.834243 sshd[4779]: Connection closed by 10.200.16.10 port 58048 Dec 13 13:19:10.834753 sshd-session[4777]: pam_unix(sshd:session): session closed for user core Dec 13 13:19:10.838218 systemd[1]: sshd@7-10.200.20.34:22-10.200.16.10:58048.service: Deactivated successfully. Dec 13 13:19:10.839912 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 13:19:10.840669 systemd-logind[1699]: Session 10 logged out. Waiting for processes to exit. Dec 13 13:19:10.841566 systemd-logind[1699]: Removed session 10. Dec 13 13:19:15.914452 systemd[1]: Started sshd@8-10.200.20.34:22-10.200.16.10:58056.service - OpenSSH per-connection server daemon (10.200.16.10:58056). Dec 13 13:19:16.330476 sshd[4791]: Accepted publickey for core from 10.200.16.10 port 58056 ssh2: RSA SHA256:s/ry0hNLnvKqnMQ9cPrjUFS9LNOYotk3LUB29ZrhvrI Dec 13 13:19:16.332326 sshd-session[4791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:19:16.338528 systemd-logind[1699]: New session 11 of user core. Dec 13 13:19:16.344392 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 13 13:19:16.706556 sshd[4793]: Connection closed by 10.200.16.10 port 58056 Dec 13 13:19:16.707314 sshd-session[4791]: pam_unix(sshd:session): session closed for user core Dec 13 13:19:16.710862 systemd[1]: sshd@8-10.200.20.34:22-10.200.16.10:58056.service: Deactivated successfully. Dec 13 13:19:16.713151 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 13:19:16.713867 systemd-logind[1699]: Session 11 logged out. Waiting for processes to exit. Dec 13 13:19:16.714956 systemd-logind[1699]: Removed session 11. Dec 13 13:19:21.789490 systemd[1]: Started sshd@9-10.200.20.34:22-10.200.16.10:37750.service - OpenSSH per-connection server daemon (10.200.16.10:37750). Dec 13 13:19:22.219363 sshd[4805]: Accepted publickey for core from 10.200.16.10 port 37750 ssh2: RSA SHA256:s/ry0hNLnvKqnMQ9cPrjUFS9LNOYotk3LUB29ZrhvrI Dec 13 13:19:22.220796 sshd-session[4805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:19:22.225052 systemd-logind[1699]: New session 12 of user core. Dec 13 13:19:22.230346 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 13:19:22.606388 sshd[4807]: Connection closed by 10.200.16.10 port 37750 Dec 13 13:19:22.606995 sshd-session[4805]: pam_unix(sshd:session): session closed for user core Dec 13 13:19:22.611212 systemd[1]: sshd@9-10.200.20.34:22-10.200.16.10:37750.service: Deactivated successfully. Dec 13 13:19:22.615571 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 13:19:22.617212 systemd-logind[1699]: Session 12 logged out. Waiting for processes to exit. Dec 13 13:19:22.618786 systemd-logind[1699]: Removed session 12. Dec 13 13:19:27.687439 systemd[1]: Started sshd@10-10.200.20.34:22-10.200.16.10:37754.service - OpenSSH per-connection server daemon (10.200.16.10:37754). Dec 13 13:19:28.108653 sshd[4819]: Accepted publickey for core from 10.200.16.10 port 37754 ssh2: RSA SHA256:s/ry0hNLnvKqnMQ9cPrjUFS9LNOYotk3LUB29ZrhvrI Dec 13 13:19:28.110052 sshd-session[4819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:19:28.114998 systemd-logind[1699]: New session 13 of user core. Dec 13 13:19:28.125327 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 13:19:28.496254 sshd[4821]: Connection closed by 10.200.16.10 port 37754 Dec 13 13:19:28.497077 sshd-session[4819]: pam_unix(sshd:session): session closed for user core Dec 13 13:19:28.500717 systemd[1]: sshd@10-10.200.20.34:22-10.200.16.10:37754.service: Deactivated successfully. Dec 13 13:19:28.502920 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 13:19:28.504650 systemd-logind[1699]: Session 13 logged out. Waiting for processes to exit. Dec 13 13:19:28.507532 systemd-logind[1699]: Removed session 13. Dec 13 13:19:28.570152 systemd[1]: Started sshd@11-10.200.20.34:22-10.200.16.10:44210.service - OpenSSH per-connection server daemon (10.200.16.10:44210). Dec 13 13:19:28.990099 sshd[4832]: Accepted publickey for core from 10.200.16.10 port 44210 ssh2: RSA SHA256:s/ry0hNLnvKqnMQ9cPrjUFS9LNOYotk3LUB29ZrhvrI Dec 13 13:19:28.991416 sshd-session[4832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:19:28.996478 systemd-logind[1699]: New session 14 of user core. Dec 13 13:19:29.003326 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 13 13:19:29.401168 sshd[4834]: Connection closed by 10.200.16.10 port 44210 Dec 13 13:19:29.401897 sshd-session[4832]: pam_unix(sshd:session): session closed for user core Dec 13 13:19:29.406120 systemd[1]: sshd@11-10.200.20.34:22-10.200.16.10:44210.service: Deactivated successfully. Dec 13 13:19:29.408580 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 13:19:29.410193 systemd-logind[1699]: Session 14 logged out. Waiting for processes to exit. Dec 13 13:19:29.412080 systemd-logind[1699]: Removed session 14. Dec 13 13:19:29.483492 systemd[1]: Started sshd@12-10.200.20.34:22-10.200.16.10:44220.service - OpenSSH per-connection server daemon (10.200.16.10:44220). Dec 13 13:19:29.898123 sshd[4842]: Accepted publickey for core from 10.200.16.10 port 44220 ssh2: RSA SHA256:s/ry0hNLnvKqnMQ9cPrjUFS9LNOYotk3LUB29ZrhvrI Dec 13 13:19:29.899742 sshd-session[4842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:19:29.904783 systemd-logind[1699]: New session 15 of user core. Dec 13 13:19:29.911368 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 13:19:30.271004 sshd[4844]: Connection closed by 10.200.16.10 port 44220 Dec 13 13:19:30.271623 sshd-session[4842]: pam_unix(sshd:session): session closed for user core Dec 13 13:19:30.274937 systemd-logind[1699]: Session 15 logged out. Waiting for processes to exit. Dec 13 13:19:30.275172 systemd[1]: sshd@12-10.200.20.34:22-10.200.16.10:44220.service: Deactivated successfully. Dec 13 13:19:30.277753 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 13:19:30.280720 systemd-logind[1699]: Removed session 15. Dec 13 13:19:35.352464 systemd[1]: Started sshd@13-10.200.20.34:22-10.200.16.10:44228.service - OpenSSH per-connection server daemon (10.200.16.10:44228). Dec 13 13:19:35.779914 sshd[4857]: Accepted publickey for core from 10.200.16.10 port 44228 ssh2: RSA SHA256:s/ry0hNLnvKqnMQ9cPrjUFS9LNOYotk3LUB29ZrhvrI Dec 13 13:19:35.781402 sshd-session[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:19:35.785502 systemd-logind[1699]: New session 16 of user core. Dec 13 13:19:35.794307 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 13:19:36.151557 sshd[4859]: Connection closed by 10.200.16.10 port 44228 Dec 13 13:19:36.152188 sshd-session[4857]: pam_unix(sshd:session): session closed for user core Dec 13 13:19:36.155826 systemd[1]: sshd@13-10.200.20.34:22-10.200.16.10:44228.service: Deactivated successfully. Dec 13 13:19:36.158214 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 13:19:36.159576 systemd-logind[1699]: Session 16 logged out. Waiting for processes to exit. Dec 13 13:19:36.160567 systemd-logind[1699]: Removed session 16. Dec 13 13:19:41.229920 systemd[1]: Started sshd@14-10.200.20.34:22-10.200.16.10:43656.service - OpenSSH per-connection server daemon (10.200.16.10:43656). Dec 13 13:19:41.661798 sshd[4870]: Accepted publickey for core from 10.200.16.10 port 43656 ssh2: RSA SHA256:s/ry0hNLnvKqnMQ9cPrjUFS9LNOYotk3LUB29ZrhvrI Dec 13 13:19:41.663339 sshd-session[4870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:19:41.668173 systemd-logind[1699]: New session 17 of user core. Dec 13 13:19:41.672356 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 13 13:19:42.047475 sshd[4872]: Connection closed by 10.200.16.10 port 43656 Dec 13 13:19:42.048079 sshd-session[4870]: pam_unix(sshd:session): session closed for user core Dec 13 13:19:42.052166 systemd[1]: sshd@14-10.200.20.34:22-10.200.16.10:43656.service: Deactivated successfully. Dec 13 13:19:42.054278 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 13:19:42.055202 systemd-logind[1699]: Session 17 logged out. Waiting for processes to exit. Dec 13 13:19:42.056654 systemd-logind[1699]: Removed session 17. Dec 13 13:19:42.127215 systemd[1]: Started sshd@15-10.200.20.34:22-10.200.16.10:43658.service - OpenSSH per-connection server daemon (10.200.16.10:43658). Dec 13 13:19:42.559514 sshd[4884]: Accepted publickey for core from 10.200.16.10 port 43658 ssh2: RSA SHA256:s/ry0hNLnvKqnMQ9cPrjUFS9LNOYotk3LUB29ZrhvrI Dec 13 13:19:42.560923 sshd-session[4884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:19:42.565811 systemd-logind[1699]: New session 18 of user core. Dec 13 13:19:42.571316 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 13:19:42.994842 sshd[4886]: Connection closed by 10.200.16.10 port 43658 Dec 13 13:19:42.994699 sshd-session[4884]: pam_unix(sshd:session): session closed for user core Dec 13 13:19:42.999449 systemd[1]: sshd@15-10.200.20.34:22-10.200.16.10:43658.service: Deactivated successfully. Dec 13 13:19:43.003085 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 13:19:43.004313 systemd-logind[1699]: Session 18 logged out. Waiting for processes to exit. Dec 13 13:19:43.006038 systemd-logind[1699]: Removed session 18. Dec 13 13:19:43.077499 systemd[1]: Started sshd@16-10.200.20.34:22-10.200.16.10:43668.service - OpenSSH per-connection server daemon (10.200.16.10:43668). Dec 13 13:19:43.511355 sshd[4895]: Accepted publickey for core from 10.200.16.10 port 43668 ssh2: RSA SHA256:s/ry0hNLnvKqnMQ9cPrjUFS9LNOYotk3LUB29ZrhvrI Dec 13 13:19:43.512918 sshd-session[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:19:43.518272 systemd-logind[1699]: New session 19 of user core. Dec 13 13:19:43.528345 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 13:19:45.338208 sshd[4897]: Connection closed by 10.200.16.10 port 43668 Dec 13 13:19:45.338833 sshd-session[4895]: pam_unix(sshd:session): session closed for user core Dec 13 13:19:45.342220 systemd[1]: sshd@16-10.200.20.34:22-10.200.16.10:43668.service: Deactivated successfully. Dec 13 13:19:45.344780 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 13:19:45.348120 systemd-logind[1699]: Session 19 logged out. Waiting for processes to exit. Dec 13 13:19:45.349588 systemd-logind[1699]: Removed session 19. Dec 13 13:19:45.419550 systemd[1]: Started sshd@17-10.200.20.34:22-10.200.16.10:43676.service - OpenSSH per-connection server daemon (10.200.16.10:43676). Dec 13 13:19:45.837219 sshd[4914]: Accepted publickey for core from 10.200.16.10 port 43676 ssh2: RSA SHA256:s/ry0hNLnvKqnMQ9cPrjUFS9LNOYotk3LUB29ZrhvrI Dec 13 13:19:45.840277 sshd-session[4914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:19:45.847404 systemd-logind[1699]: New session 20 of user core. Dec 13 13:19:45.852763 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 13 13:19:46.337219 sshd[4916]: Connection closed by 10.200.16.10 port 43676 Dec 13 13:19:46.337815 sshd-session[4914]: pam_unix(sshd:session): session closed for user core Dec 13 13:19:46.341731 systemd[1]: sshd@17-10.200.20.34:22-10.200.16.10:43676.service: Deactivated successfully. Dec 13 13:19:46.345942 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 13:19:46.347624 systemd-logind[1699]: Session 20 logged out. Waiting for processes to exit. Dec 13 13:19:46.349377 systemd-logind[1699]: Removed session 20. Dec 13 13:19:46.419638 systemd[1]: Started sshd@18-10.200.20.34:22-10.200.16.10:43692.service - OpenSSH per-connection server daemon (10.200.16.10:43692). Dec 13 13:19:46.841464 sshd[4925]: Accepted publickey for core from 10.200.16.10 port 43692 ssh2: RSA SHA256:s/ry0hNLnvKqnMQ9cPrjUFS9LNOYotk3LUB29ZrhvrI Dec 13 13:19:46.843195 sshd-session[4925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:19:46.847913 systemd-logind[1699]: New session 21 of user core. Dec 13 13:19:46.855441 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 13:19:47.222757 sshd[4927]: Connection closed by 10.200.16.10 port 43692 Dec 13 13:19:47.223375 sshd-session[4925]: pam_unix(sshd:session): session closed for user core Dec 13 13:19:47.226852 systemd[1]: sshd@18-10.200.20.34:22-10.200.16.10:43692.service: Deactivated successfully. Dec 13 13:19:47.229882 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 13:19:47.232726 systemd-logind[1699]: Session 21 logged out. Waiting for processes to exit. Dec 13 13:19:47.234227 systemd-logind[1699]: Removed session 21. Dec 13 13:19:52.305484 systemd[1]: Started sshd@19-10.200.20.34:22-10.200.16.10:58566.service - OpenSSH per-connection server daemon (10.200.16.10:58566). Dec 13 13:19:52.730837 sshd[4943]: Accepted publickey for core from 10.200.16.10 port 58566 ssh2: RSA SHA256:s/ry0hNLnvKqnMQ9cPrjUFS9LNOYotk3LUB29ZrhvrI Dec 13 13:19:52.732396 sshd-session[4943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:19:52.737524 systemd-logind[1699]: New session 22 of user core. Dec 13 13:19:52.743318 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 13:19:53.097250 sshd[4945]: Connection closed by 10.200.16.10 port 58566 Dec 13 13:19:53.098034 sshd-session[4943]: pam_unix(sshd:session): session closed for user core Dec 13 13:19:53.102597 systemd-logind[1699]: Session 22 logged out. Waiting for processes to exit. Dec 13 13:19:53.102930 systemd[1]: sshd@19-10.200.20.34:22-10.200.16.10:58566.service: Deactivated successfully. Dec 13 13:19:53.105123 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 13:19:53.106252 systemd-logind[1699]: Removed session 22. Dec 13 13:19:58.178458 systemd[1]: Started sshd@20-10.200.20.34:22-10.200.16.10:58582.service - OpenSSH per-connection server daemon (10.200.16.10:58582). Dec 13 13:19:58.613021 sshd[4956]: Accepted publickey for core from 10.200.16.10 port 58582 ssh2: RSA SHA256:s/ry0hNLnvKqnMQ9cPrjUFS9LNOYotk3LUB29ZrhvrI Dec 13 13:19:58.614441 sshd-session[4956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:19:58.618641 systemd-logind[1699]: New session 23 of user core. Dec 13 13:19:58.628343 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 13 13:19:58.999685 sshd[4958]: Connection closed by 10.200.16.10 port 58582 Dec 13 13:19:59.000667 sshd-session[4956]: pam_unix(sshd:session): session closed for user core Dec 13 13:19:59.004113 systemd[1]: sshd@20-10.200.20.34:22-10.200.16.10:58582.service: Deactivated successfully. Dec 13 13:19:59.004114 systemd-logind[1699]: Session 23 logged out. Waiting for processes to exit. Dec 13 13:19:59.007510 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 13:19:59.010139 systemd-logind[1699]: Removed session 23. Dec 13 13:20:04.079461 systemd[1]: Started sshd@21-10.200.20.34:22-10.200.16.10:51536.service - OpenSSH per-connection server daemon (10.200.16.10:51536). Dec 13 13:20:04.493048 sshd[4971]: Accepted publickey for core from 10.200.16.10 port 51536 ssh2: RSA SHA256:s/ry0hNLnvKqnMQ9cPrjUFS9LNOYotk3LUB29ZrhvrI Dec 13 13:20:04.494409 sshd-session[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:20:04.499302 systemd-logind[1699]: New session 24 of user core. Dec 13 13:20:04.505281 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 13:20:04.867056 sshd[4974]: Connection closed by 10.200.16.10 port 51536 Dec 13 13:20:04.866233 sshd-session[4971]: pam_unix(sshd:session): session closed for user core Dec 13 13:20:04.869956 systemd[1]: sshd@21-10.200.20.34:22-10.200.16.10:51536.service: Deactivated successfully. Dec 13 13:20:04.870001 systemd-logind[1699]: Session 24 logged out. Waiting for processes to exit. Dec 13 13:20:04.874081 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 13:20:04.876981 systemd-logind[1699]: Removed session 24. Dec 13 13:20:04.953480 systemd[1]: Started sshd@22-10.200.20.34:22-10.200.16.10:51542.service - OpenSSH per-connection server daemon (10.200.16.10:51542). Dec 13 13:20:05.385758 sshd[4985]: Accepted publickey for core from 10.200.16.10 port 51542 ssh2: RSA SHA256:s/ry0hNLnvKqnMQ9cPrjUFS9LNOYotk3LUB29ZrhvrI Dec 13 13:20:05.387242 sshd-session[4985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:20:05.392443 systemd-logind[1699]: New session 25 of user core. Dec 13 13:20:05.396574 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 13:20:07.338053 containerd[1734]: time="2024-12-13T13:20:07.336468998Z" level=info msg="StopContainer for \"dc6825393fcf4bdaf8ad72acbaa9d60d7f008247900d872ecb0c909d2e79b7f4\" with timeout 30 (s)" Dec 13 13:20:07.341422 containerd[1734]: time="2024-12-13T13:20:07.340443567Z" level=info msg="Stop container \"dc6825393fcf4bdaf8ad72acbaa9d60d7f008247900d872ecb0c909d2e79b7f4\" with signal terminated" Dec 13 13:20:07.353510 containerd[1734]: time="2024-12-13T13:20:07.353467398Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:20:07.367413 systemd[1]: cri-containerd-dc6825393fcf4bdaf8ad72acbaa9d60d7f008247900d872ecb0c909d2e79b7f4.scope: Deactivated successfully. 
Dec 13 13:20:07.372516 containerd[1734]: time="2024-12-13T13:20:07.372480203Z" level=info msg="StopContainer for \"99feda62a869506fc48b24dcb9888912a46943a65744ea78a1d3f39783a0d6d1\" with timeout 2 (s)" Dec 13 13:20:07.373302 containerd[1734]: time="2024-12-13T13:20:07.373278205Z" level=info msg="Stop container \"99feda62a869506fc48b24dcb9888912a46943a65744ea78a1d3f39783a0d6d1\" with signal terminated" Dec 13 13:20:07.386354 systemd-networkd[1451]: lxc_health: Link DOWN Dec 13 13:20:07.386364 systemd-networkd[1451]: lxc_health: Lost carrier Dec 13 13:20:07.402829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc6825393fcf4bdaf8ad72acbaa9d60d7f008247900d872ecb0c909d2e79b7f4-rootfs.mount: Deactivated successfully. Dec 13 13:20:07.408103 systemd[1]: cri-containerd-99feda62a869506fc48b24dcb9888912a46943a65744ea78a1d3f39783a0d6d1.scope: Deactivated successfully. Dec 13 13:20:07.408870 systemd[1]: cri-containerd-99feda62a869506fc48b24dcb9888912a46943a65744ea78a1d3f39783a0d6d1.scope: Consumed 6.590s CPU time. Dec 13 13:20:07.430831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99feda62a869506fc48b24dcb9888912a46943a65744ea78a1d3f39783a0d6d1-rootfs.mount: Deactivated successfully. Dec 13 13:20:07.438486 containerd[1734]: time="2024-12-13T13:20:07.438356878Z" level=info msg="shim disconnected" id=dc6825393fcf4bdaf8ad72acbaa9d60d7f008247900d872ecb0c909d2e79b7f4 namespace=k8s.io Dec 13 13:20:07.438486 containerd[1734]: time="2024-12-13T13:20:07.438468279Z" level=warning msg="cleaning up after shim disconnected" id=dc6825393fcf4bdaf8ad72acbaa9d60d7f008247900d872ecb0c909d2e79b7f4 namespace=k8s.io Dec 13 13:20:07.438486 containerd[1734]: time="2024-12-13T13:20:07.438479039Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:20:07.440714 containerd[1734]: time="2024-12-13T13:20:07.440499883Z" level=info msg="shim disconnected" id=99feda62a869506fc48b24dcb9888912a46943a65744ea78a1d3f39783a0d6d1 namespace=k8s.io Dec 13 13:20:07.440714 containerd[1734]: time="2024-12-13T13:20:07.440610964Z" level=warning msg="cleaning up after shim disconnected" id=99feda62a869506fc48b24dcb9888912a46943a65744ea78a1d3f39783a0d6d1 namespace=k8s.io Dec 13 13:20:07.440714 containerd[1734]: time="2024-12-13T13:20:07.440620684Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:20:07.471379 containerd[1734]: time="2024-12-13T13:20:07.471106476Z" level=info msg="StopContainer for \"dc6825393fcf4bdaf8ad72acbaa9d60d7f008247900d872ecb0c909d2e79b7f4\" returns successfully" Dec 13 13:20:07.472076 containerd[1734]: time="2024-12-13T13:20:07.471832717Z" level=info msg="StopPodSandbox for \"2c9ceb3f1bd6d53cee22fce30be309e3697d48140896c04e273d6e3fa3c39006\"" Dec 13 13:20:07.472076 containerd[1734]: time="2024-12-13T13:20:07.471878437Z" level=info msg="Container to stop \"dc6825393fcf4bdaf8ad72acbaa9d60d7f008247900d872ecb0c909d2e79b7f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:20:07.473861 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2c9ceb3f1bd6d53cee22fce30be309e3697d48140896c04e273d6e3fa3c39006-shm.mount: Deactivated successfully. 
Dec 13 13:20:07.480101 containerd[1734]: time="2024-12-13T13:20:07.479722016Z" level=info msg="StopContainer for \"99feda62a869506fc48b24dcb9888912a46943a65744ea78a1d3f39783a0d6d1\" returns successfully" Dec 13 13:20:07.480802 containerd[1734]: time="2024-12-13T13:20:07.480458778Z" level=info msg="StopPodSandbox for \"faf95902c8864b2309bcca06f013853c2a54e69b81d17f91ffe4c964e7973e75\"" Dec 13 13:20:07.480802 containerd[1734]: time="2024-12-13T13:20:07.480502738Z" level=info msg="Container to stop \"99feda62a869506fc48b24dcb9888912a46943a65744ea78a1d3f39783a0d6d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:20:07.480802 containerd[1734]: time="2024-12-13T13:20:07.480515458Z" level=info msg="Container to stop \"0ee72be1d968b1d144dec1143467b199b1c372090842dd0fc191f349a6efad36\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:20:07.480802 containerd[1734]: time="2024-12-13T13:20:07.480523698Z" level=info msg="Container to stop \"6f115bc0544ae7d2ab02bee88d5e4b8d8b6ee812eb2dcff8e1dd54971807b4a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:20:07.480802 containerd[1734]: time="2024-12-13T13:20:07.480532138Z" level=info msg="Container to stop \"791baa6075bf070d06b7f3c8c558daa2bc98d7fdef628894a2c63ca5a06e8d61\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:20:07.480802 containerd[1734]: time="2024-12-13T13:20:07.480542938Z" level=info msg="Container to stop \"8a80b64fcd5d272d222210400e5e9bf35b082fcf8fa80eba5259d6e1dcad39d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:20:07.482482 systemd[1]: cri-containerd-2c9ceb3f1bd6d53cee22fce30be309e3697d48140896c04e273d6e3fa3c39006.scope: Deactivated successfully. Dec 13 13:20:07.491693 systemd[1]: cri-containerd-faf95902c8864b2309bcca06f013853c2a54e69b81d17f91ffe4c964e7973e75.scope: Deactivated successfully. 
Dec 13 13:20:07.522557 containerd[1734]: time="2024-12-13T13:20:07.522147036Z" level=info msg="shim disconnected" id=faf95902c8864b2309bcca06f013853c2a54e69b81d17f91ffe4c964e7973e75 namespace=k8s.io Dec 13 13:20:07.522766 containerd[1734]: time="2024-12-13T13:20:07.522562717Z" level=warning msg="cleaning up after shim disconnected" id=faf95902c8864b2309bcca06f013853c2a54e69b81d17f91ffe4c964e7973e75 namespace=k8s.io Dec 13 13:20:07.522766 containerd[1734]: time="2024-12-13T13:20:07.522577677Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:20:07.523767 containerd[1734]: time="2024-12-13T13:20:07.522327797Z" level=info msg="shim disconnected" id=2c9ceb3f1bd6d53cee22fce30be309e3697d48140896c04e273d6e3fa3c39006 namespace=k8s.io Dec 13 13:20:07.523767 containerd[1734]: time="2024-12-13T13:20:07.523171279Z" level=warning msg="cleaning up after shim disconnected" id=2c9ceb3f1bd6d53cee22fce30be309e3697d48140896c04e273d6e3fa3c39006 namespace=k8s.io Dec 13 13:20:07.523767 containerd[1734]: time="2024-12-13T13:20:07.523181319Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:20:07.537505 containerd[1734]: time="2024-12-13T13:20:07.537450152Z" level=warning msg="cleanup warnings time=\"2024-12-13T13:20:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 13:20:07.539488 containerd[1734]: time="2024-12-13T13:20:07.539435917Z" level=info msg="TearDown network for sandbox \"faf95902c8864b2309bcca06f013853c2a54e69b81d17f91ffe4c964e7973e75\" successfully" Dec 13 13:20:07.539488 containerd[1734]: time="2024-12-13T13:20:07.539492917Z" level=info msg="StopPodSandbox for \"faf95902c8864b2309bcca06f013853c2a54e69b81d17f91ffe4c964e7973e75\" returns successfully" Dec 13 13:20:07.541178 containerd[1734]: time="2024-12-13T13:20:07.541143361Z" level=info msg="TearDown network for sandbox \"2c9ceb3f1bd6d53cee22fce30be309e3697d48140896c04e273d6e3fa3c39006\" successfully" Dec 13 13:20:07.541390 containerd[1734]: time="2024-12-13T13:20:07.541291721Z" level=info msg="StopPodSandbox for \"2c9ceb3f1bd6d53cee22fce30be309e3697d48140896c04e273d6e3fa3c39006\" returns successfully" Dec 13 13:20:07.691535 kubelet[3371]: I1213 13:20:07.690943 3371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8586645d-c092-47bf-a9c8-56dd9e82e2c7-hubble-tls\") pod \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " Dec 13 13:20:07.691535 kubelet[3371]: I1213 13:20:07.690997 3371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-cilium-cgroup\") pod \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " Dec 13 13:20:07.691535 kubelet[3371]: I1213 13:20:07.691016 3371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-etc-cni-netd\") pod \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " Dec 13 13:20:07.691535 kubelet[3371]: I1213 13:20:07.691031 3371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-bpf-maps\") pod 
\"8586645d-c092-47bf-a9c8-56dd9e82e2c7\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " Dec 13 13:20:07.691535 kubelet[3371]: I1213 13:20:07.691049 3371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-cni-path\") pod \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " Dec 13 13:20:07.691535 kubelet[3371]: I1213 13:20:07.691066 3371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bfdc435f-db7f-4c33-ae6b-9b68ec6f47be-cilium-config-path\") pod \"bfdc435f-db7f-4c33-ae6b-9b68ec6f47be\" (UID: \"bfdc435f-db7f-4c33-ae6b-9b68ec6f47be\") " Dec 13 13:20:07.692019 kubelet[3371]: I1213 13:20:07.691082 3371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-host-proc-sys-kernel\") pod \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " Dec 13 13:20:07.692019 kubelet[3371]: I1213 13:20:07.691100 3371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-xtables-lock\") pod \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " Dec 13 13:20:07.692019 kubelet[3371]: I1213 13:20:07.691116 3371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-hostproc\") pod \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " Dec 13 13:20:07.692019 kubelet[3371]: I1213 13:20:07.691154 3371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xr8j\" (UniqueName: \"kubernetes.io/projected/8586645d-c092-47bf-a9c8-56dd9e82e2c7-kube-api-access-8xr8j\") pod \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " Dec 13 13:20:07.692019 kubelet[3371]: I1213 13:20:07.691169 3371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-cilium-run\") pod \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " Dec 13 13:20:07.692019 kubelet[3371]: I1213 13:20:07.691183 3371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-host-proc-sys-net\") pod \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " Dec 13 13:20:07.692176 kubelet[3371]: I1213 13:20:07.691202 3371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8586645d-c092-47bf-a9c8-56dd9e82e2c7-clustermesh-secrets\") pod \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " Dec 13 13:20:07.692176 kubelet[3371]: I1213 13:20:07.691220 3371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8586645d-c092-47bf-a9c8-56dd9e82e2c7-cilium-config-path\") pod 
\"8586645d-c092-47bf-a9c8-56dd9e82e2c7\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " Dec 13 13:20:07.692176 kubelet[3371]: I1213 13:20:07.691238 3371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-lib-modules\") pod \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\" (UID: \"8586645d-c092-47bf-a9c8-56dd9e82e2c7\") " Dec 13 13:20:07.692176 kubelet[3371]: I1213 13:20:07.691257 3371 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbfwp\" (UniqueName: \"kubernetes.io/projected/bfdc435f-db7f-4c33-ae6b-9b68ec6f47be-kube-api-access-xbfwp\") pod \"bfdc435f-db7f-4c33-ae6b-9b68ec6f47be\" (UID: \"bfdc435f-db7f-4c33-ae6b-9b68ec6f47be\") " Dec 13 13:20:07.693558 kubelet[3371]: I1213 13:20:07.693369 3371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8586645d-c092-47bf-a9c8-56dd9e82e2c7" (UID: "8586645d-c092-47bf-a9c8-56dd9e82e2c7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:20:07.693558 kubelet[3371]: I1213 13:20:07.693453 3371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8586645d-c092-47bf-a9c8-56dd9e82e2c7" (UID: "8586645d-c092-47bf-a9c8-56dd9e82e2c7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:20:07.693558 kubelet[3371]: I1213 13:20:07.693473 3371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8586645d-c092-47bf-a9c8-56dd9e82e2c7" (UID: "8586645d-c092-47bf-a9c8-56dd9e82e2c7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:20:07.693558 kubelet[3371]: I1213 13:20:07.693488 3371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8586645d-c092-47bf-a9c8-56dd9e82e2c7" (UID: "8586645d-c092-47bf-a9c8-56dd9e82e2c7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:20:07.693558 kubelet[3371]: I1213 13:20:07.693503 3371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-cni-path" (OuterVolumeSpecName: "cni-path") pod "8586645d-c092-47bf-a9c8-56dd9e82e2c7" (UID: "8586645d-c092-47bf-a9c8-56dd9e82e2c7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:20:07.694441 kubelet[3371]: I1213 13:20:07.694225 3371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-hostproc" (OuterVolumeSpecName: "hostproc") pod "8586645d-c092-47bf-a9c8-56dd9e82e2c7" (UID: "8586645d-c092-47bf-a9c8-56dd9e82e2c7"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:20:07.694879 kubelet[3371]: I1213 13:20:07.694811 3371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8586645d-c092-47bf-a9c8-56dd9e82e2c7" (UID: "8586645d-c092-47bf-a9c8-56dd9e82e2c7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:20:07.696317 kubelet[3371]: I1213 13:20:07.696178 3371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8586645d-c092-47bf-a9c8-56dd9e82e2c7" (UID: "8586645d-c092-47bf-a9c8-56dd9e82e2c7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:20:07.696317 kubelet[3371]: I1213 13:20:07.696236 3371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8586645d-c092-47bf-a9c8-56dd9e82e2c7" (UID: "8586645d-c092-47bf-a9c8-56dd9e82e2c7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:20:07.696710 kubelet[3371]: I1213 13:20:07.696574 3371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8586645d-c092-47bf-a9c8-56dd9e82e2c7" (UID: "8586645d-c092-47bf-a9c8-56dd9e82e2c7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 13:20:07.696710 kubelet[3371]: I1213 13:20:07.696666 3371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8586645d-c092-47bf-a9c8-56dd9e82e2c7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8586645d-c092-47bf-a9c8-56dd9e82e2c7" (UID: "8586645d-c092-47bf-a9c8-56dd9e82e2c7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 13:20:07.697013 kubelet[3371]: I1213 13:20:07.696951 3371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfdc435f-db7f-4c33-ae6b-9b68ec6f47be-kube-api-access-xbfwp" (OuterVolumeSpecName: "kube-api-access-xbfwp") pod "bfdc435f-db7f-4c33-ae6b-9b68ec6f47be" (UID: "bfdc435f-db7f-4c33-ae6b-9b68ec6f47be"). InnerVolumeSpecName "kube-api-access-xbfwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 13:20:07.699390 kubelet[3371]: I1213 13:20:07.699283 3371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8586645d-c092-47bf-a9c8-56dd9e82e2c7-kube-api-access-8xr8j" (OuterVolumeSpecName: "kube-api-access-8xr8j") pod "8586645d-c092-47bf-a9c8-56dd9e82e2c7" (UID: "8586645d-c092-47bf-a9c8-56dd9e82e2c7"). InnerVolumeSpecName "kube-api-access-8xr8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 13:20:07.699390 kubelet[3371]: I1213 13:20:07.699282 3371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8586645d-c092-47bf-a9c8-56dd9e82e2c7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8586645d-c092-47bf-a9c8-56dd9e82e2c7" (UID: "8586645d-c092-47bf-a9c8-56dd9e82e2c7"). 
InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 13:20:07.699818 kubelet[3371]: I1213 13:20:07.699779 3371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfdc435f-db7f-4c33-ae6b-9b68ec6f47be-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bfdc435f-db7f-4c33-ae6b-9b68ec6f47be" (UID: "bfdc435f-db7f-4c33-ae6b-9b68ec6f47be"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 13:20:07.700108 kubelet[3371]: I1213 13:20:07.700082 3371 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8586645d-c092-47bf-a9c8-56dd9e82e2c7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8586645d-c092-47bf-a9c8-56dd9e82e2c7" (UID: "8586645d-c092-47bf-a9c8-56dd9e82e2c7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 13:20:07.791779 kubelet[3371]: I1213 13:20:07.791573 3371 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-xbfwp\" (UniqueName: \"kubernetes.io/projected/bfdc435f-db7f-4c33-ae6b-9b68ec6f47be-kube-api-access-xbfwp\") on node \"ci-4186.0.0-a-128d80e197\" DevicePath \"\"" Dec 13 13:20:07.791779 kubelet[3371]: I1213 13:20:07.791606 3371 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-lib-modules\") on node \"ci-4186.0.0-a-128d80e197\" DevicePath \"\"" Dec 13 13:20:07.791779 kubelet[3371]: I1213 13:20:07.791616 3371 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-cilium-cgroup\") on node \"ci-4186.0.0-a-128d80e197\" DevicePath \"\"" Dec 13 13:20:07.791779 kubelet[3371]: I1213 13:20:07.791627 3371 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8586645d-c092-47bf-a9c8-56dd9e82e2c7-hubble-tls\") on node \"ci-4186.0.0-a-128d80e197\" DevicePath \"\"" Dec 13 13:20:07.791779 kubelet[3371]: I1213 13:20:07.791635 3371 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-cni-path\") on node \"ci-4186.0.0-a-128d80e197\" DevicePath \"\"" Dec 13 13:20:07.791779 kubelet[3371]: I1213 13:20:07.791668 3371 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bfdc435f-db7f-4c33-ae6b-9b68ec6f47be-cilium-config-path\") on node \"ci-4186.0.0-a-128d80e197\" DevicePath \"\"" Dec 13 13:20:07.791779 kubelet[3371]: I1213 13:20:07.791677 3371 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-etc-cni-netd\") on node \"ci-4186.0.0-a-128d80e197\" DevicePath \"\"" Dec 13 13:20:07.791779 kubelet[3371]: I1213 13:20:07.791686 3371 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-bpf-maps\") on node \"ci-4186.0.0-a-128d80e197\" DevicePath \"\"" Dec 13 13:20:07.792071 kubelet[3371]: I1213 13:20:07.791696 3371 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-host-proc-sys-kernel\") on node \"ci-4186.0.0-a-128d80e197\" 
DevicePath \"\"" Dec 13 13:20:07.792071 kubelet[3371]: I1213 13:20:07.791704 3371 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-xtables-lock\") on node \"ci-4186.0.0-a-128d80e197\" DevicePath \"\"" Dec 13 13:20:07.792071 kubelet[3371]: I1213 13:20:07.791712 3371 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-hostproc\") on node \"ci-4186.0.0-a-128d80e197\" DevicePath \"\"" Dec 13 13:20:07.792071 kubelet[3371]: I1213 13:20:07.791721 3371 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-8xr8j\" (UniqueName: \"kubernetes.io/projected/8586645d-c092-47bf-a9c8-56dd9e82e2c7-kube-api-access-8xr8j\") on node \"ci-4186.0.0-a-128d80e197\" DevicePath \"\"" Dec 13 13:20:07.792071 kubelet[3371]: I1213 13:20:07.791734 3371 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-cilium-run\") on node \"ci-4186.0.0-a-128d80e197\" DevicePath \"\"" Dec 13 13:20:07.792071 kubelet[3371]: I1213 13:20:07.791744 3371 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8586645d-c092-47bf-a9c8-56dd9e82e2c7-host-proc-sys-net\") on node \"ci-4186.0.0-a-128d80e197\" DevicePath \"\"" Dec 13 13:20:07.792071 kubelet[3371]: I1213 13:20:07.791752 3371 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8586645d-c092-47bf-a9c8-56dd9e82e2c7-cilium-config-path\") on node \"ci-4186.0.0-a-128d80e197\" DevicePath \"\"" Dec 13 13:20:07.792071 kubelet[3371]: I1213 13:20:07.791760 3371 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8586645d-c092-47bf-a9c8-56dd9e82e2c7-clustermesh-secrets\") on node \"ci-4186.0.0-a-128d80e197\" DevicePath \"\"" Dec 13 13:20:08.330499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c9ceb3f1bd6d53cee22fce30be309e3697d48140896c04e273d6e3fa3c39006-rootfs.mount: Deactivated successfully. Dec 13 13:20:08.330593 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-faf95902c8864b2309bcca06f013853c2a54e69b81d17f91ffe4c964e7973e75-rootfs.mount: Deactivated successfully. Dec 13 13:20:08.330653 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-faf95902c8864b2309bcca06f013853c2a54e69b81d17f91ffe4c964e7973e75-shm.mount: Deactivated successfully. Dec 13 13:20:08.330705 systemd[1]: var-lib-kubelet-pods-bfdc435f\x2ddb7f\x2d4c33\x2dae6b\x2d9b68ec6f47be-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxbfwp.mount: Deactivated successfully. Dec 13 13:20:08.330756 systemd[1]: var-lib-kubelet-pods-8586645d\x2dc092\x2d47bf\x2da9c8\x2d56dd9e82e2c7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8xr8j.mount: Deactivated successfully. Dec 13 13:20:08.330803 systemd[1]: var-lib-kubelet-pods-8586645d\x2dc092\x2d47bf\x2da9c8\x2d56dd9e82e2c7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 13:20:08.330852 systemd[1]: var-lib-kubelet-pods-8586645d\x2dc092\x2d47bf\x2da9c8\x2d56dd9e82e2c7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 13:20:08.384704 kubelet[3371]: I1213 13:20:08.384664 3371 scope.go:117] "RemoveContainer" containerID="99feda62a869506fc48b24dcb9888912a46943a65744ea78a1d3f39783a0d6d1" Dec 13 13:20:08.388781 containerd[1734]: time="2024-12-13T13:20:08.388683842Z" level=info msg="RemoveContainer for \"99feda62a869506fc48b24dcb9888912a46943a65744ea78a1d3f39783a0d6d1\"" Dec 13 13:20:08.392988 systemd[1]: Removed slice kubepods-burstable-pod8586645d_c092_47bf_a9c8_56dd9e82e2c7.slice - libcontainer container kubepods-burstable-pod8586645d_c092_47bf_a9c8_56dd9e82e2c7.slice. Dec 13 13:20:08.393100 systemd[1]: kubepods-burstable-pod8586645d_c092_47bf_a9c8_56dd9e82e2c7.slice: Consumed 6.662s CPU time. Dec 13 13:20:08.398692 systemd[1]: Removed slice kubepods-besteffort-podbfdc435f_db7f_4c33_ae6b_9b68ec6f47be.slice - libcontainer container kubepods-besteffort-podbfdc435f_db7f_4c33_ae6b_9b68ec6f47be.slice. Dec 13 13:20:08.401215 containerd[1734]: time="2024-12-13T13:20:08.400551790Z" level=info msg="RemoveContainer for \"99feda62a869506fc48b24dcb9888912a46943a65744ea78a1d3f39783a0d6d1\" returns successfully" Dec 13 13:20:08.401778 kubelet[3371]: I1213 13:20:08.401584 3371 scope.go:117] "RemoveContainer" containerID="8a80b64fcd5d272d222210400e5e9bf35b082fcf8fa80eba5259d6e1dcad39d4" Dec 13 13:20:08.403951 containerd[1734]: time="2024-12-13T13:20:08.403827238Z" level=info msg="RemoveContainer for \"8a80b64fcd5d272d222210400e5e9bf35b082fcf8fa80eba5259d6e1dcad39d4\"" Dec 13 13:20:08.416841 containerd[1734]: time="2024-12-13T13:20:08.416739588Z" level=info msg="RemoveContainer for \"8a80b64fcd5d272d222210400e5e9bf35b082fcf8fa80eba5259d6e1dcad39d4\" returns successfully" Dec 13 13:20:08.417077 kubelet[3371]: I1213 13:20:08.417003 3371 scope.go:117] "RemoveContainer" containerID="791baa6075bf070d06b7f3c8c558daa2bc98d7fdef628894a2c63ca5a06e8d61" Dec 13 13:20:08.418530 containerd[1734]: time="2024-12-13T13:20:08.418238872Z" level=info msg="RemoveContainer for \"791baa6075bf070d06b7f3c8c558daa2bc98d7fdef628894a2c63ca5a06e8d61\"" Dec 13 13:20:08.428759 containerd[1734]: time="2024-12-13T13:20:08.428704177Z" level=info msg="RemoveContainer for \"791baa6075bf070d06b7f3c8c558daa2bc98d7fdef628894a2c63ca5a06e8d61\" returns successfully" Dec 13 13:20:08.429022 kubelet[3371]: I1213 13:20:08.428984 3371 scope.go:117] "RemoveContainer" containerID="6f115bc0544ae7d2ab02bee88d5e4b8d8b6ee812eb2dcff8e1dd54971807b4a3" Dec 13 13:20:08.430496 containerd[1734]: time="2024-12-13T13:20:08.430452221Z" level=info msg="RemoveContainer for \"6f115bc0544ae7d2ab02bee88d5e4b8d8b6ee812eb2dcff8e1dd54971807b4a3\"" Dec 13 13:20:08.442980 containerd[1734]: time="2024-12-13T13:20:08.442898370Z" level=info msg="RemoveContainer for \"6f115bc0544ae7d2ab02bee88d5e4b8d8b6ee812eb2dcff8e1dd54971807b4a3\" returns successfully" Dec 13 13:20:08.443273 kubelet[3371]: I1213 13:20:08.443198 3371 scope.go:117] "RemoveContainer" containerID="0ee72be1d968b1d144dec1143467b199b1c372090842dd0fc191f349a6efad36" Dec 13 13:20:08.446575 containerd[1734]: time="2024-12-13T13:20:08.446526979Z" level=info msg="RemoveContainer for \"0ee72be1d968b1d144dec1143467b199b1c372090842dd0fc191f349a6efad36\"" Dec 13 13:20:08.455049 containerd[1734]: time="2024-12-13T13:20:08.455001079Z" level=info msg="RemoveContainer for \"0ee72be1d968b1d144dec1143467b199b1c372090842dd0fc191f349a6efad36\" returns successfully" Dec 13 13:20:08.455455 kubelet[3371]: I1213 13:20:08.455326 3371 scope.go:117] "RemoveContainer" containerID="99feda62a869506fc48b24dcb9888912a46943a65744ea78a1d3f39783a0d6d1" Dec 
13 13:20:08.455736 containerd[1734]: time="2024-12-13T13:20:08.455681040Z" level=error msg="ContainerStatus for \"99feda62a869506fc48b24dcb9888912a46943a65744ea78a1d3f39783a0d6d1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"99feda62a869506fc48b24dcb9888912a46943a65744ea78a1d3f39783a0d6d1\": not found" Dec 13 13:20:08.455890 kubelet[3371]: E1213 13:20:08.455861 3371 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"99feda62a869506fc48b24dcb9888912a46943a65744ea78a1d3f39783a0d6d1\": not found" containerID="99feda62a869506fc48b24dcb9888912a46943a65744ea78a1d3f39783a0d6d1" Dec 13 13:20:08.455989 kubelet[3371]: I1213 13:20:08.455900 3371 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"99feda62a869506fc48b24dcb9888912a46943a65744ea78a1d3f39783a0d6d1"} err="failed to get container status \"99feda62a869506fc48b24dcb9888912a46943a65744ea78a1d3f39783a0d6d1\": rpc error: code = NotFound desc = an error occurred when try to find container \"99feda62a869506fc48b24dcb9888912a46943a65744ea78a1d3f39783a0d6d1\": not found" Dec 13 13:20:08.455989 kubelet[3371]: I1213 13:20:08.455989 3371 scope.go:117] "RemoveContainer" containerID="8a80b64fcd5d272d222210400e5e9bf35b082fcf8fa80eba5259d6e1dcad39d4" Dec 13 13:20:08.456254 containerd[1734]: time="2024-12-13T13:20:08.456224522Z" level=error msg="ContainerStatus for \"8a80b64fcd5d272d222210400e5e9bf35b082fcf8fa80eba5259d6e1dcad39d4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8a80b64fcd5d272d222210400e5e9bf35b082fcf8fa80eba5259d6e1dcad39d4\": not found" Dec 13 13:20:08.456548 kubelet[3371]: E1213 13:20:08.456519 3371 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8a80b64fcd5d272d222210400e5e9bf35b082fcf8fa80eba5259d6e1dcad39d4\": not found" containerID="8a80b64fcd5d272d222210400e5e9bf35b082fcf8fa80eba5259d6e1dcad39d4" Dec 13 13:20:08.456604 kubelet[3371]: I1213 13:20:08.456551 3371 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8a80b64fcd5d272d222210400e5e9bf35b082fcf8fa80eba5259d6e1dcad39d4"} err="failed to get container status \"8a80b64fcd5d272d222210400e5e9bf35b082fcf8fa80eba5259d6e1dcad39d4\": rpc error: code = NotFound desc = an error occurred when try to find container \"8a80b64fcd5d272d222210400e5e9bf35b082fcf8fa80eba5259d6e1dcad39d4\": not found" Dec 13 13:20:08.456604 kubelet[3371]: I1213 13:20:08.456576 3371 scope.go:117] "RemoveContainer" containerID="791baa6075bf070d06b7f3c8c558daa2bc98d7fdef628894a2c63ca5a06e8d61" Dec 13 13:20:08.456890 containerd[1734]: time="2024-12-13T13:20:08.456762523Z" level=error msg="ContainerStatus for \"791baa6075bf070d06b7f3c8c558daa2bc98d7fdef628894a2c63ca5a06e8d61\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"791baa6075bf070d06b7f3c8c558daa2bc98d7fdef628894a2c63ca5a06e8d61\": not found" Dec 13 13:20:08.457042 kubelet[3371]: E1213 13:20:08.457020 3371 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"791baa6075bf070d06b7f3c8c558daa2bc98d7fdef628894a2c63ca5a06e8d61\": not found" 
containerID="791baa6075bf070d06b7f3c8c558daa2bc98d7fdef628894a2c63ca5a06e8d61" Dec 13 13:20:08.457226 kubelet[3371]: I1213 13:20:08.457103 3371 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"791baa6075bf070d06b7f3c8c558daa2bc98d7fdef628894a2c63ca5a06e8d61"} err="failed to get container status \"791baa6075bf070d06b7f3c8c558daa2bc98d7fdef628894a2c63ca5a06e8d61\": rpc error: code = NotFound desc = an error occurred when try to find container \"791baa6075bf070d06b7f3c8c558daa2bc98d7fdef628894a2c63ca5a06e8d61\": not found" Dec 13 13:20:08.457226 kubelet[3371]: I1213 13:20:08.457143 3371 scope.go:117] "RemoveContainer" containerID="6f115bc0544ae7d2ab02bee88d5e4b8d8b6ee812eb2dcff8e1dd54971807b4a3" Dec 13 13:20:08.457406 containerd[1734]: time="2024-12-13T13:20:08.457362924Z" level=error msg="ContainerStatus for \"6f115bc0544ae7d2ab02bee88d5e4b8d8b6ee812eb2dcff8e1dd54971807b4a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6f115bc0544ae7d2ab02bee88d5e4b8d8b6ee812eb2dcff8e1dd54971807b4a3\": not found" Dec 13 13:20:08.457612 kubelet[3371]: E1213 13:20:08.457541 3371 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6f115bc0544ae7d2ab02bee88d5e4b8d8b6ee812eb2dcff8e1dd54971807b4a3\": not found" containerID="6f115bc0544ae7d2ab02bee88d5e4b8d8b6ee812eb2dcff8e1dd54971807b4a3" Dec 13 13:20:08.457612 kubelet[3371]: I1213 13:20:08.457597 3371 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6f115bc0544ae7d2ab02bee88d5e4b8d8b6ee812eb2dcff8e1dd54971807b4a3"} err="failed to get container status \"6f115bc0544ae7d2ab02bee88d5e4b8d8b6ee812eb2dcff8e1dd54971807b4a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"6f115bc0544ae7d2ab02bee88d5e4b8d8b6ee812eb2dcff8e1dd54971807b4a3\": not found" Dec 13 13:20:08.457728 kubelet[3371]: I1213 13:20:08.457617 3371 scope.go:117] "RemoveContainer" containerID="0ee72be1d968b1d144dec1143467b199b1c372090842dd0fc191f349a6efad36" Dec 13 13:20:08.458036 containerd[1734]: time="2024-12-13T13:20:08.457951126Z" level=error msg="ContainerStatus for \"0ee72be1d968b1d144dec1143467b199b1c372090842dd0fc191f349a6efad36\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ee72be1d968b1d144dec1143467b199b1c372090842dd0fc191f349a6efad36\": not found" Dec 13 13:20:08.458124 kubelet[3371]: E1213 13:20:08.458105 3371 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ee72be1d968b1d144dec1143467b199b1c372090842dd0fc191f349a6efad36\": not found" containerID="0ee72be1d968b1d144dec1143467b199b1c372090842dd0fc191f349a6efad36" Dec 13 13:20:08.458186 kubelet[3371]: I1213 13:20:08.458142 3371 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ee72be1d968b1d144dec1143467b199b1c372090842dd0fc191f349a6efad36"} err="failed to get container status \"0ee72be1d968b1d144dec1143467b199b1c372090842dd0fc191f349a6efad36\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ee72be1d968b1d144dec1143467b199b1c372090842dd0fc191f349a6efad36\": not found" Dec 13 13:20:08.458210 kubelet[3371]: I1213 13:20:08.458187 3371 scope.go:117] "RemoveContainer" 
containerID="dc6825393fcf4bdaf8ad72acbaa9d60d7f008247900d872ecb0c909d2e79b7f4" Dec 13 13:20:08.459620 containerd[1734]: time="2024-12-13T13:20:08.459579610Z" level=info msg="RemoveContainer for \"dc6825393fcf4bdaf8ad72acbaa9d60d7f008247900d872ecb0c909d2e79b7f4\"" Dec 13 13:20:08.467453 containerd[1734]: time="2024-12-13T13:20:08.467410668Z" level=info msg="RemoveContainer for \"dc6825393fcf4bdaf8ad72acbaa9d60d7f008247900d872ecb0c909d2e79b7f4\" returns successfully" Dec 13 13:20:08.467802 kubelet[3371]: I1213 13:20:08.467693 3371 scope.go:117] "RemoveContainer" containerID="dc6825393fcf4bdaf8ad72acbaa9d60d7f008247900d872ecb0c909d2e79b7f4" Dec 13 13:20:08.468165 containerd[1734]: time="2024-12-13T13:20:08.468086990Z" level=error msg="ContainerStatus for \"dc6825393fcf4bdaf8ad72acbaa9d60d7f008247900d872ecb0c909d2e79b7f4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dc6825393fcf4bdaf8ad72acbaa9d60d7f008247900d872ecb0c909d2e79b7f4\": not found" Dec 13 13:20:08.468446 kubelet[3371]: E1213 13:20:08.468374 3371 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dc6825393fcf4bdaf8ad72acbaa9d60d7f008247900d872ecb0c909d2e79b7f4\": not found" containerID="dc6825393fcf4bdaf8ad72acbaa9d60d7f008247900d872ecb0c909d2e79b7f4" Dec 13 13:20:08.468446 kubelet[3371]: I1213 13:20:08.468417 3371 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dc6825393fcf4bdaf8ad72acbaa9d60d7f008247900d872ecb0c909d2e79b7f4"} err="failed to get container status \"dc6825393fcf4bdaf8ad72acbaa9d60d7f008247900d872ecb0c909d2e79b7f4\": rpc error: code = NotFound desc = an error occurred when try to find container \"dc6825393fcf4bdaf8ad72acbaa9d60d7f008247900d872ecb0c909d2e79b7f4\": not found" Dec 13 13:20:08.901872 kubelet[3371]: I1213 13:20:08.901781 3371 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8586645d-c092-47bf-a9c8-56dd9e82e2c7" path="/var/lib/kubelet/pods/8586645d-c092-47bf-a9c8-56dd9e82e2c7/volumes" Dec 13 13:20:08.902661 kubelet[3371]: I1213 13:20:08.902632 3371 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfdc435f-db7f-4c33-ae6b-9b68ec6f47be" path="/var/lib/kubelet/pods/bfdc435f-db7f-4c33-ae6b-9b68ec6f47be/volumes" Dec 13 13:20:09.336310 sshd[4988]: Connection closed by 10.200.16.10 port 51542 Dec 13 13:20:09.337047 sshd-session[4985]: pam_unix(sshd:session): session closed for user core Dec 13 13:20:09.340800 systemd[1]: sshd@22-10.200.20.34:22-10.200.16.10:51542.service: Deactivated successfully. Dec 13 13:20:09.342924 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 13:20:09.343788 systemd[1]: session-25.scope: Consumed 1.040s CPU time. Dec 13 13:20:09.344558 systemd-logind[1699]: Session 25 logged out. Waiting for processes to exit. Dec 13 13:20:09.346206 systemd-logind[1699]: Removed session 25. Dec 13 13:20:09.427474 systemd[1]: Started sshd@23-10.200.20.34:22-10.200.16.10:39812.service - OpenSSH per-connection server daemon (10.200.16.10:39812). Dec 13 13:20:09.855920 sshd[5147]: Accepted publickey for core from 10.200.16.10 port 39812 ssh2: RSA SHA256:s/ry0hNLnvKqnMQ9cPrjUFS9LNOYotk3LUB29ZrhvrI Dec 13 13:20:09.857413 sshd-session[5147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:20:09.862368 systemd-logind[1699]: New session 26 of user core. 
Dec 13 13:20:09.868319 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 13:20:11.138467 kubelet[3371]: I1213 13:20:11.138410 3371 topology_manager.go:215] "Topology Admit Handler" podUID="02ad1c7d-c020-487b-8ac2-c92b20ab75c2" podNamespace="kube-system" podName="cilium-dhs9c" Dec 13 13:20:11.138974 kubelet[3371]: E1213 13:20:11.138483 3371 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8586645d-c092-47bf-a9c8-56dd9e82e2c7" containerName="mount-cgroup" Dec 13 13:20:11.138974 kubelet[3371]: E1213 13:20:11.138494 3371 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bfdc435f-db7f-4c33-ae6b-9b68ec6f47be" containerName="cilium-operator" Dec 13 13:20:11.138974 kubelet[3371]: E1213 13:20:11.138500 3371 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8586645d-c092-47bf-a9c8-56dd9e82e2c7" containerName="cilium-agent" Dec 13 13:20:11.138974 kubelet[3371]: E1213 13:20:11.138505 3371 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8586645d-c092-47bf-a9c8-56dd9e82e2c7" containerName="apply-sysctl-overwrites" Dec 13 13:20:11.138974 kubelet[3371]: E1213 13:20:11.138511 3371 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8586645d-c092-47bf-a9c8-56dd9e82e2c7" containerName="mount-bpf-fs" Dec 13 13:20:11.138974 kubelet[3371]: E1213 13:20:11.138517 3371 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8586645d-c092-47bf-a9c8-56dd9e82e2c7" containerName="clean-cilium-state" Dec 13 13:20:11.138974 kubelet[3371]: I1213 13:20:11.138538 3371 memory_manager.go:354] "RemoveStaleState removing state" podUID="8586645d-c092-47bf-a9c8-56dd9e82e2c7" containerName="cilium-agent" Dec 13 13:20:11.138974 kubelet[3371]: I1213 13:20:11.138544 3371 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfdc435f-db7f-4c33-ae6b-9b68ec6f47be" containerName="cilium-operator" Dec 13 13:20:11.150225 systemd[1]: Created slice kubepods-burstable-pod02ad1c7d_c020_487b_8ac2_c92b20ab75c2.slice - libcontainer container kubepods-burstable-pod02ad1c7d_c020_487b_8ac2_c92b20ab75c2.slice. Dec 13 13:20:11.177088 sshd[5151]: Connection closed by 10.200.16.10 port 39812 Dec 13 13:20:11.177714 sshd-session[5147]: pam_unix(sshd:session): session closed for user core Dec 13 13:20:11.186235 systemd[1]: sshd@23-10.200.20.34:22-10.200.16.10:39812.service: Deactivated successfully. Dec 13 13:20:11.190788 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 13:20:11.194234 systemd-logind[1699]: Session 26 logged out. Waiting for processes to exit. Dec 13 13:20:11.197609 systemd-logind[1699]: Removed session 26. Dec 13 13:20:11.253841 systemd[1]: Started sshd@24-10.200.20.34:22-10.200.16.10:39814.service - OpenSSH per-connection server daemon (10.200.16.10:39814). 
Dec 13 13:20:11.306091 kubelet[3371]: I1213 13:20:11.305968 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02ad1c7d-c020-487b-8ac2-c92b20ab75c2-etc-cni-netd\") pod \"cilium-dhs9c\" (UID: \"02ad1c7d-c020-487b-8ac2-c92b20ab75c2\") " pod="kube-system/cilium-dhs9c" Dec 13 13:20:11.306091 kubelet[3371]: I1213 13:20:11.306009 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/02ad1c7d-c020-487b-8ac2-c92b20ab75c2-host-proc-sys-net\") pod \"cilium-dhs9c\" (UID: \"02ad1c7d-c020-487b-8ac2-c92b20ab75c2\") " pod="kube-system/cilium-dhs9c" Dec 13 13:20:11.306091 kubelet[3371]: I1213 13:20:11.306031 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02ad1c7d-c020-487b-8ac2-c92b20ab75c2-cilium-config-path\") pod \"cilium-dhs9c\" (UID: \"02ad1c7d-c020-487b-8ac2-c92b20ab75c2\") " pod="kube-system/cilium-dhs9c" Dec 13 13:20:11.306091 kubelet[3371]: I1213 13:20:11.306047 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/02ad1c7d-c020-487b-8ac2-c92b20ab75c2-cilium-ipsec-secrets\") pod \"cilium-dhs9c\" (UID: \"02ad1c7d-c020-487b-8ac2-c92b20ab75c2\") " pod="kube-system/cilium-dhs9c" Dec 13 13:20:11.306091 kubelet[3371]: I1213 13:20:11.306068 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/02ad1c7d-c020-487b-8ac2-c92b20ab75c2-cilium-cgroup\") pod \"cilium-dhs9c\" (UID: \"02ad1c7d-c020-487b-8ac2-c92b20ab75c2\") " pod="kube-system/cilium-dhs9c" Dec 13 13:20:11.306422 kubelet[3371]: I1213 13:20:11.306115 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02ad1c7d-c020-487b-8ac2-c92b20ab75c2-lib-modules\") pod \"cilium-dhs9c\" (UID: \"02ad1c7d-c020-487b-8ac2-c92b20ab75c2\") " pod="kube-system/cilium-dhs9c" Dec 13 13:20:11.306422 kubelet[3371]: I1213 13:20:11.306178 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/02ad1c7d-c020-487b-8ac2-c92b20ab75c2-host-proc-sys-kernel\") pod \"cilium-dhs9c\" (UID: \"02ad1c7d-c020-487b-8ac2-c92b20ab75c2\") " pod="kube-system/cilium-dhs9c" Dec 13 13:20:11.306422 kubelet[3371]: I1213 13:20:11.306206 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/02ad1c7d-c020-487b-8ac2-c92b20ab75c2-hubble-tls\") pod \"cilium-dhs9c\" (UID: \"02ad1c7d-c020-487b-8ac2-c92b20ab75c2\") " pod="kube-system/cilium-dhs9c" Dec 13 13:20:11.306422 kubelet[3371]: I1213 13:20:11.306230 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/02ad1c7d-c020-487b-8ac2-c92b20ab75c2-cilium-run\") pod \"cilium-dhs9c\" (UID: \"02ad1c7d-c020-487b-8ac2-c92b20ab75c2\") " pod="kube-system/cilium-dhs9c" Dec 13 13:20:11.306422 kubelet[3371]: I1213 13:20:11.306249 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/02ad1c7d-c020-487b-8ac2-c92b20ab75c2-clustermesh-secrets\") pod \"cilium-dhs9c\" (UID: \"02ad1c7d-c020-487b-8ac2-c92b20ab75c2\") " pod="kube-system/cilium-dhs9c" Dec 13 13:20:11.306422 kubelet[3371]: I1213 13:20:11.306264 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkgds\" (UniqueName: \"kubernetes.io/projected/02ad1c7d-c020-487b-8ac2-c92b20ab75c2-kube-api-access-lkgds\") pod \"cilium-dhs9c\" (UID: \"02ad1c7d-c020-487b-8ac2-c92b20ab75c2\") " pod="kube-system/cilium-dhs9c" Dec 13 13:20:11.306564 kubelet[3371]: I1213 13:20:11.306289 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/02ad1c7d-c020-487b-8ac2-c92b20ab75c2-bpf-maps\") pod \"cilium-dhs9c\" (UID: \"02ad1c7d-c020-487b-8ac2-c92b20ab75c2\") " pod="kube-system/cilium-dhs9c" Dec 13 13:20:11.306564 kubelet[3371]: I1213 13:20:11.306314 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/02ad1c7d-c020-487b-8ac2-c92b20ab75c2-hostproc\") pod \"cilium-dhs9c\" (UID: \"02ad1c7d-c020-487b-8ac2-c92b20ab75c2\") " pod="kube-system/cilium-dhs9c" Dec 13 13:20:11.306564 kubelet[3371]: I1213 13:20:11.306332 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/02ad1c7d-c020-487b-8ac2-c92b20ab75c2-cni-path\") pod \"cilium-dhs9c\" (UID: \"02ad1c7d-c020-487b-8ac2-c92b20ab75c2\") " pod="kube-system/cilium-dhs9c" Dec 13 13:20:11.306564 kubelet[3371]: I1213 13:20:11.306350 3371 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02ad1c7d-c020-487b-8ac2-c92b20ab75c2-xtables-lock\") pod \"cilium-dhs9c\" (UID: \"02ad1c7d-c020-487b-8ac2-c92b20ab75c2\") " pod="kube-system/cilium-dhs9c" Dec 13 13:20:11.455927 containerd[1734]: time="2024-12-13T13:20:11.455749204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dhs9c,Uid:02ad1c7d-c020-487b-8ac2-c92b20ab75c2,Namespace:kube-system,Attempt:0,}" Dec 13 13:20:11.506196 containerd[1734]: time="2024-12-13T13:20:11.505254361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:20:11.506196 containerd[1734]: time="2024-12-13T13:20:11.505318961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:20:11.506196 containerd[1734]: time="2024-12-13T13:20:11.505331081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:20:11.506196 containerd[1734]: time="2024-12-13T13:20:11.505417242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:20:11.525375 systemd[1]: Started cri-containerd-fb3a3096438cabbc3c95980fedd3dee53b9a24a3c89f29ccec4c098ff9d7e007.scope - libcontainer container fb3a3096438cabbc3c95980fedd3dee53b9a24a3c89f29ccec4c098ff9d7e007. 
Dec 13 13:20:11.549250 containerd[1734]: time="2024-12-13T13:20:11.549113185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dhs9c,Uid:02ad1c7d-c020-487b-8ac2-c92b20ab75c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb3a3096438cabbc3c95980fedd3dee53b9a24a3c89f29ccec4c098ff9d7e007\"" Dec 13 13:20:11.553675 containerd[1734]: time="2024-12-13T13:20:11.553552875Z" level=info msg="CreateContainer within sandbox \"fb3a3096438cabbc3c95980fedd3dee53b9a24a3c89f29ccec4c098ff9d7e007\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 13:20:11.628301 containerd[1734]: time="2024-12-13T13:20:11.628242812Z" level=info msg="CreateContainer within sandbox \"fb3a3096438cabbc3c95980fedd3dee53b9a24a3c89f29ccec4c098ff9d7e007\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4151d5d79337c7dcd1276e49645790f20cc579ad9af66cffbc922effc599ede6\"" Dec 13 13:20:11.629334 containerd[1734]: time="2024-12-13T13:20:11.629039613Z" level=info msg="StartContainer for \"4151d5d79337c7dcd1276e49645790f20cc579ad9af66cffbc922effc599ede6\"" Dec 13 13:20:11.658358 systemd[1]: Started cri-containerd-4151d5d79337c7dcd1276e49645790f20cc579ad9af66cffbc922effc599ede6.scope - libcontainer container 4151d5d79337c7dcd1276e49645790f20cc579ad9af66cffbc922effc599ede6. Dec 13 13:20:11.687427 containerd[1734]: time="2024-12-13T13:20:11.687362871Z" level=info msg="StartContainer for \"4151d5d79337c7dcd1276e49645790f20cc579ad9af66cffbc922effc599ede6\" returns successfully" Dec 13 13:20:11.694374 systemd[1]: cri-containerd-4151d5d79337c7dcd1276e49645790f20cc579ad9af66cffbc922effc599ede6.scope: Deactivated successfully. Dec 13 13:20:11.702834 sshd[5161]: Accepted publickey for core from 10.200.16.10 port 39814 ssh2: RSA SHA256:s/ry0hNLnvKqnMQ9cPrjUFS9LNOYotk3LUB29ZrhvrI Dec 13 13:20:11.704570 sshd-session[5161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:20:11.711615 systemd-logind[1699]: New session 27 of user core. Dec 13 13:20:11.719386 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 13 13:20:11.793990 containerd[1734]: time="2024-12-13T13:20:11.793910643Z" level=info msg="shim disconnected" id=4151d5d79337c7dcd1276e49645790f20cc579ad9af66cffbc922effc599ede6 namespace=k8s.io Dec 13 13:20:11.793990 containerd[1734]: time="2024-12-13T13:20:11.793977963Z" level=warning msg="cleaning up after shim disconnected" id=4151d5d79337c7dcd1276e49645790f20cc579ad9af66cffbc922effc599ede6 namespace=k8s.io Dec 13 13:20:11.793990 containerd[1734]: time="2024-12-13T13:20:11.793987203Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:20:12.019798 sshd[5258]: Connection closed by 10.200.16.10 port 39814 Dec 13 13:20:12.020605 sshd-session[5161]: pam_unix(sshd:session): session closed for user core Dec 13 13:20:12.025740 systemd[1]: sshd@24-10.200.20.34:22-10.200.16.10:39814.service: Deactivated successfully. Dec 13 13:20:12.026476 kubelet[3371]: E1213 13:20:12.026123 3371 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 13:20:12.029632 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 13:20:12.031214 systemd-logind[1699]: Session 27 logged out. Waiting for processes to exit. Dec 13 13:20:12.032396 systemd-logind[1699]: Removed session 27. 
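The kubelet's "Container runtime network not ready ... cni plugin not initialized" error at 13:20:12 is consistent with this point in the rollout: the old cilium-agent is gone and the replacement pod has only completed its mount-cgroup step, so no CNI configuration is installed yet. A small sketch for checking CNI state on the node follows; the paths /etc/cni/net.d and /opt/cni/bin are the conventional defaults and are assumptions, not values printed in the log.

    # Illustrative check for CNI configuration on the node. /etc/cni/net.d
    # (config) and /opt/cni/bin (plugins) are the conventional default paths
    # and are assumptions, not values taken from this log.
    from pathlib import Path

    CNI_CONF = Path("/etc/cni/net.d")
    CNI_BIN = Path("/opt/cni/bin")

    def listing(path: Path) -> list[str]:
        return sorted(p.name for p in path.iterdir()) if path.is_dir() else []

    if __name__ == "__main__":
        print("CNI config files:", listing(CNI_CONF) or "none (network not ready)")
        print("CNI plugins:     ", listing(CNI_BIN) or "none")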
Dec 13 13:20:12.105498 systemd[1]: Started sshd@25-10.200.20.34:22-10.200.16.10:39828.service - OpenSSH per-connection server daemon (10.200.16.10:39828). Dec 13 13:20:12.405912 containerd[1734]: time="2024-12-13T13:20:12.405726727Z" level=info msg="CreateContainer within sandbox \"fb3a3096438cabbc3c95980fedd3dee53b9a24a3c89f29ccec4c098ff9d7e007\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 13:20:12.436389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1386615065.mount: Deactivated successfully. Dec 13 13:20:12.439267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount113753495.mount: Deactivated successfully. Dec 13 13:20:12.447794 containerd[1734]: time="2024-12-13T13:20:12.447667186Z" level=info msg="CreateContainer within sandbox \"fb3a3096438cabbc3c95980fedd3dee53b9a24a3c89f29ccec4c098ff9d7e007\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"05bd18625a7de4ff3dacee88fe160b639ba36cb84e9ee6112aebbf94c99b1032\"" Dec 13 13:20:12.448872 containerd[1734]: time="2024-12-13T13:20:12.448533428Z" level=info msg="StartContainer for \"05bd18625a7de4ff3dacee88fe160b639ba36cb84e9ee6112aebbf94c99b1032\"" Dec 13 13:20:12.480402 systemd[1]: Started cri-containerd-05bd18625a7de4ff3dacee88fe160b639ba36cb84e9ee6112aebbf94c99b1032.scope - libcontainer container 05bd18625a7de4ff3dacee88fe160b639ba36cb84e9ee6112aebbf94c99b1032. Dec 13 13:20:12.508869 containerd[1734]: time="2024-12-13T13:20:12.508704171Z" level=info msg="StartContainer for \"05bd18625a7de4ff3dacee88fe160b639ba36cb84e9ee6112aebbf94c99b1032\" returns successfully" Dec 13 13:20:12.512309 systemd[1]: cri-containerd-05bd18625a7de4ff3dacee88fe160b639ba36cb84e9ee6112aebbf94c99b1032.scope: Deactivated successfully. Dec 13 13:20:12.540027 sshd[5276]: Accepted publickey for core from 10.200.16.10 port 39828 ssh2: RSA SHA256:s/ry0hNLnvKqnMQ9cPrjUFS9LNOYotk3LUB29ZrhvrI Dec 13 13:20:12.541671 sshd-session[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:20:12.546399 systemd-logind[1699]: New session 28 of user core. Dec 13 13:20:12.550361 systemd[1]: Started session-28.scope - Session 28 of User core. Dec 13 13:20:12.551441 containerd[1734]: time="2024-12-13T13:20:12.549366267Z" level=info msg="shim disconnected" id=05bd18625a7de4ff3dacee88fe160b639ba36cb84e9ee6112aebbf94c99b1032 namespace=k8s.io Dec 13 13:20:12.551441 containerd[1734]: time="2024-12-13T13:20:12.549426187Z" level=warning msg="cleaning up after shim disconnected" id=05bd18625a7de4ff3dacee88fe160b639ba36cb84e9ee6112aebbf94c99b1032 namespace=k8s.io Dec 13 13:20:12.551441 containerd[1734]: time="2024-12-13T13:20:12.549433947Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:20:12.565313 containerd[1734]: time="2024-12-13T13:20:12.565240704Z" level=warning msg="cleanup warnings time=\"2024-12-13T13:20:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 13:20:13.412755 containerd[1734]: time="2024-12-13T13:20:13.412590665Z" level=info msg="CreateContainer within sandbox \"fb3a3096438cabbc3c95980fedd3dee53b9a24a3c89f29ccec4c098ff9d7e007\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 13:20:13.414783 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05bd18625a7de4ff3dacee88fe160b639ba36cb84e9ee6112aebbf94c99b1032-rootfs.mount: Deactivated successfully. 
Dec 13 13:20:13.448299 containerd[1734]: time="2024-12-13T13:20:13.448248909Z" level=info msg="CreateContainer within sandbox \"fb3a3096438cabbc3c95980fedd3dee53b9a24a3c89f29ccec4c098ff9d7e007\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fb72d7eaab908d377409f5f4155006699b563e0188cbfffe452a34493a84beb5\"" Dec 13 13:20:13.449786 containerd[1734]: time="2024-12-13T13:20:13.449740473Z" level=info msg="StartContainer for \"fb72d7eaab908d377409f5f4155006699b563e0188cbfffe452a34493a84beb5\"" Dec 13 13:20:13.476330 systemd[1]: run-containerd-runc-k8s.io-fb72d7eaab908d377409f5f4155006699b563e0188cbfffe452a34493a84beb5-runc.HPdrxo.mount: Deactivated successfully. Dec 13 13:20:13.482318 systemd[1]: Started cri-containerd-fb72d7eaab908d377409f5f4155006699b563e0188cbfffe452a34493a84beb5.scope - libcontainer container fb72d7eaab908d377409f5f4155006699b563e0188cbfffe452a34493a84beb5. Dec 13 13:20:13.512212 systemd[1]: cri-containerd-fb72d7eaab908d377409f5f4155006699b563e0188cbfffe452a34493a84beb5.scope: Deactivated successfully. Dec 13 13:20:13.515190 containerd[1734]: time="2024-12-13T13:20:13.513915144Z" level=info msg="StartContainer for \"fb72d7eaab908d377409f5f4155006699b563e0188cbfffe452a34493a84beb5\" returns successfully" Dec 13 13:20:13.551789 containerd[1734]: time="2024-12-13T13:20:13.551663273Z" level=info msg="shim disconnected" id=fb72d7eaab908d377409f5f4155006699b563e0188cbfffe452a34493a84beb5 namespace=k8s.io Dec 13 13:20:13.551789 containerd[1734]: time="2024-12-13T13:20:13.551730353Z" level=warning msg="cleaning up after shim disconnected" id=fb72d7eaab908d377409f5f4155006699b563e0188cbfffe452a34493a84beb5 namespace=k8s.io Dec 13 13:20:13.551789 containerd[1734]: time="2024-12-13T13:20:13.551737993Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:20:14.415117 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb72d7eaab908d377409f5f4155006699b563e0188cbfffe452a34493a84beb5-rootfs.mount: Deactivated successfully. Dec 13 13:20:14.423666 containerd[1734]: time="2024-12-13T13:20:14.423383692Z" level=info msg="CreateContainer within sandbox \"fb3a3096438cabbc3c95980fedd3dee53b9a24a3c89f29ccec4c098ff9d7e007\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 13:20:14.466170 containerd[1734]: time="2024-12-13T13:20:14.466090436Z" level=info msg="CreateContainer within sandbox \"fb3a3096438cabbc3c95980fedd3dee53b9a24a3c89f29ccec4c098ff9d7e007\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"29a9086cfd1731b5521d86f556c0186bce3ba28c4f7a067d16afafba87300bfc\"" Dec 13 13:20:14.467448 containerd[1734]: time="2024-12-13T13:20:14.467400776Z" level=info msg="StartContainer for \"29a9086cfd1731b5521d86f556c0186bce3ba28c4f7a067d16afafba87300bfc\"" Dec 13 13:20:14.513388 systemd[1]: Started cri-containerd-29a9086cfd1731b5521d86f556c0186bce3ba28c4f7a067d16afafba87300bfc.scope - libcontainer container 29a9086cfd1731b5521d86f556c0186bce3ba28c4f7a067d16afafba87300bfc. Dec 13 13:20:14.565963 systemd[1]: cri-containerd-29a9086cfd1731b5521d86f556c0186bce3ba28c4f7a067d16afafba87300bfc.scope: Deactivated successfully. 
Dec 13 13:20:14.572473 containerd[1734]: time="2024-12-13T13:20:14.571808577Z" level=info msg="StartContainer for \"29a9086cfd1731b5521d86f556c0186bce3ba28c4f7a067d16afafba87300bfc\" returns successfully" Dec 13 13:20:14.609411 containerd[1734]: time="2024-12-13T13:20:14.609290192Z" level=info msg="shim disconnected" id=29a9086cfd1731b5521d86f556c0186bce3ba28c4f7a067d16afafba87300bfc namespace=k8s.io Dec 13 13:20:14.609411 containerd[1734]: time="2024-12-13T13:20:14.609408554Z" level=warning msg="cleaning up after shim disconnected" id=29a9086cfd1731b5521d86f556c0186bce3ba28c4f7a067d16afafba87300bfc namespace=k8s.io Dec 13 13:20:14.609663 containerd[1734]: time="2024-12-13T13:20:14.609421794Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:20:15.415261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29a9086cfd1731b5521d86f556c0186bce3ba28c4f7a067d16afafba87300bfc-rootfs.mount: Deactivated successfully. Dec 13 13:20:15.428707 containerd[1734]: time="2024-12-13T13:20:15.428316231Z" level=info msg="CreateContainer within sandbox \"fb3a3096438cabbc3c95980fedd3dee53b9a24a3c89f29ccec4c098ff9d7e007\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 13:20:15.463696 containerd[1734]: time="2024-12-13T13:20:15.463644813Z" level=info msg="CreateContainer within sandbox \"fb3a3096438cabbc3c95980fedd3dee53b9a24a3c89f29ccec4c098ff9d7e007\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e2a2da4dea6adf89ea57de200f049fcf02c73a5ef01017c34ffad8553946d2a5\"" Dec 13 13:20:15.465565 containerd[1734]: time="2024-12-13T13:20:15.464522746Z" level=info msg="StartContainer for \"e2a2da4dea6adf89ea57de200f049fcf02c73a5ef01017c34ffad8553946d2a5\"" Dec 13 13:20:15.496360 systemd[1]: Started cri-containerd-e2a2da4dea6adf89ea57de200f049fcf02c73a5ef01017c34ffad8553946d2a5.scope - libcontainer container e2a2da4dea6adf89ea57de200f049fcf02c73a5ef01017c34ffad8553946d2a5. Dec 13 13:20:15.527068 containerd[1734]: time="2024-12-13T13:20:15.527014424Z" level=info msg="StartContainer for \"e2a2da4dea6adf89ea57de200f049fcf02c73a5ef01017c34ffad8553946d2a5\" returns successfully" Dec 13 13:20:15.962165 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Dec 13 13:20:16.453428 kubelet[3371]: I1213 13:20:16.453355 3371 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dhs9c" podStartSLOduration=5.453335432 podStartE2EDuration="5.453335432s" podCreationTimestamp="2024-12-13 13:20:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:20:16.45266595 +0000 UTC m=+209.889968307" watchObservedRunningTime="2024-12-13 13:20:16.453335432 +0000 UTC m=+209.890637829" Dec 13 13:20:18.780558 systemd-networkd[1451]: lxc_health: Link UP Dec 13 13:20:18.794312 systemd-networkd[1451]: lxc_health: Gained carrier Dec 13 13:20:20.775319 systemd-networkd[1451]: lxc_health: Gained IPv6LL Dec 13 13:20:25.679165 sshd[5331]: Connection closed by 10.200.16.10 port 39828 Dec 13 13:20:25.679978 sshd-session[5276]: pam_unix(sshd:session): session closed for user core Dec 13 13:20:25.684878 systemd[1]: sshd@25-10.200.20.34:22-10.200.16.10:39828.service: Deactivated successfully. Dec 13 13:20:25.687240 systemd[1]: session-28.scope: Deactivated successfully. Dec 13 13:20:25.688195 systemd-logind[1699]: Session 28 logged out. Waiting for processes to exit. Dec 13 13:20:25.689765 systemd-logind[1699]: Removed session 28.
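Across the entries above the replacement pod's containers start in the usual Cilium order (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, then cilium-agent), and the kubelet reports a pod startup duration of about 5.45 s. The per-step gaps can be recovered from the containerd StartContainer entries; below is a minimal parser sketch that assumes one containerd entry per line, as journalctl emits them, with the escaped quotes kept as they appear in this log.

    # Minimal parser sketch for the containerd entries above: extract the
    # timestamps of successful StartContainer calls and print the gap between
    # consecutive container starts. Assumes one containerd log entry per line.
    import re
    import sys
    from datetime import datetime

    START = re.compile(
        r'time="(?P<ts>[^"]+)" level=info '
        r'msg="StartContainer for \\"(?P<cid>[0-9a-f]+)\\" returns successfully"'
    )

    def parse_ts(ts: str) -> datetime:
        # containerd prints nanoseconds; trim to microseconds for datetime.
        base, frac = ts.rstrip("Z").split(".")
        return datetime.strptime(f"{base}.{frac[:6]}", "%Y-%m-%dT%H:%M:%S.%f")

    def starts(lines):
        for line in lines:
            m = START.search(line)
            if m:
                yield parse_ts(m.group("ts")), m.group("cid")

    if __name__ == "__main__":
        events = sorted(starts(sys.stdin))
        for (t0, c0), (t1, c1) in zip(events, events[1:]):
            print(f"{c1[:12]} started {(t1 - t0).total_seconds():.3f}s after {c0[:12]}")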