Dec 13 01:25:38.339759 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 01:25:38.339781 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024
Dec 13 01:25:38.339789 kernel: KASLR enabled
Dec 13 01:25:38.339795 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Dec 13 01:25:38.339802 kernel: printk: bootconsole [pl11] enabled
Dec 13 01:25:38.339808 kernel: efi: EFI v2.7 by EDK II
Dec 13 01:25:38.339815 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Dec 13 01:25:38.339821 kernel: random: crng init done
Dec 13 01:25:38.339827 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:25:38.339833 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Dec 13 01:25:38.339839 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:25:38.339846 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:25:38.339853 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Dec 13 01:25:38.339859 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:25:38.339867 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:25:38.339873 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:25:38.339879 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:25:38.339887 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:25:38.339894 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:25:38.339900 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Dec 13 01:25:38.339907 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:25:38.339913 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Dec 13 01:25:38.339919 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Dec 13 01:25:38.339926 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Dec 13 01:25:38.339932 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Dec 13 01:25:38.339939 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Dec 13 01:25:38.339945 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Dec 13 01:25:38.339951 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Dec 13 01:25:38.339959 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Dec 13 01:25:38.339966 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Dec 13 01:25:38.339972 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Dec 13 01:25:38.339978 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Dec 13 01:25:38.339985 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Dec 13 01:25:38.339991 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Dec 13 01:25:38.339998 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Dec 13 01:25:38.340018 kernel: Zone ranges:
Dec 13 01:25:38.340026 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Dec 13 01:25:38.340032 kernel: DMA32 empty
Dec 13 01:25:38.340039 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Dec 13 01:25:38.340045 kernel: Movable zone start for each node
Dec 13 01:25:38.340056 kernel: Early memory node ranges
Dec 13 01:25:38.340063 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Dec 13 01:25:38.340070 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Dec 13 01:25:38.340076 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Dec 13 01:25:38.340083 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Dec 13 01:25:38.340091 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Dec 13 01:25:38.340098 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Dec 13 01:25:38.340105 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Dec 13 01:25:38.340112 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Dec 13 01:25:38.340119 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Dec 13 01:25:38.340126 kernel: psci: probing for conduit method from ACPI.
Dec 13 01:25:38.340132 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 01:25:38.340139 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 01:25:38.340146 kernel: psci: MIGRATE_INFO_TYPE not supported.
Dec 13 01:25:38.340153 kernel: psci: SMC Calling Convention v1.4
Dec 13 01:25:38.340159 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Dec 13 01:25:38.340166 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Dec 13 01:25:38.340175 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 01:25:38.340181 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 01:25:38.340188 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 13 01:25:38.340195 kernel: Detected PIPT I-cache on CPU0
Dec 13 01:25:38.340202 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 01:25:38.340209 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 01:25:38.340215 kernel: CPU features: detected: Spectre-BHB
Dec 13 01:25:38.340222 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 01:25:38.340229 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 01:25:38.340236 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 01:25:38.340243 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Dec 13 01:25:38.340251 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 01:25:38.340258 kernel: alternatives: applying boot alternatives
Dec 13 01:25:38.340266 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:25:38.340273 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:25:38.340280 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:25:38.340287 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:25:38.340294 kernel: Fallback order for Node 0: 0
Dec 13 01:25:38.340301 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Dec 13 01:25:38.340307 kernel: Policy zone: Normal
Dec 13 01:25:38.340314 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:25:38.340321 kernel: software IO TLB: area num 2.
Dec 13 01:25:38.340329 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Dec 13 01:25:38.340336 kernel: Memory: 3982756K/4194160K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 211404K reserved, 0K cma-reserved)
Dec 13 01:25:38.340343 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:25:38.340350 kernel: trace event string verifier disabled
Dec 13 01:25:38.340357 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:25:38.340364 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:25:38.340372 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:25:38.340379 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:25:38.340385 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:25:38.340392 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:25:38.340399 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:25:38.340407 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 01:25:38.340414 kernel: GICv3: 960 SPIs implemented
Dec 13 01:25:38.340421 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 01:25:38.340428 kernel: Root IRQ handler: gic_handle_irq
Dec 13 01:25:38.340435 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 13 01:25:38.340442 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Dec 13 01:25:38.340449 kernel: ITS: No ITS available, not enabling LPIs
Dec 13 01:25:38.340456 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:25:38.340462 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:25:38.340469 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 01:25:38.340477 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 01:25:38.340484 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 01:25:38.340492 kernel: Console: colour dummy device 80x25
Dec 13 01:25:38.340499 kernel: printk: console [tty1] enabled
Dec 13 01:25:38.340506 kernel: ACPI: Core revision 20230628
Dec 13 01:25:38.340513 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 01:25:38.340520 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:25:38.340527 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:25:38.340534 kernel: landlock: Up and running.
Dec 13 01:25:38.340541 kernel: SELinux: Initializing.
Dec 13 01:25:38.340548 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:25:38.340557 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:25:38.340564 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:25:38.340571 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:25:38.340578 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Dec 13 01:25:38.340585 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0
Dec 13 01:25:38.340592 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Dec 13 01:25:38.340599 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:25:38.340612 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:25:38.340619 kernel: Remapping and enabling EFI services.
Dec 13 01:25:38.340627 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:25:38.340634 kernel: Detected PIPT I-cache on CPU1
Dec 13 01:25:38.340642 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Dec 13 01:25:38.340650 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 01:25:38.340657 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 01:25:38.340665 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:25:38.340672 kernel: SMP: Total of 2 processors activated.
Dec 13 01:25:38.340679 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 01:25:38.340689 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Dec 13 01:25:38.340696 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 01:25:38.340703 kernel: CPU features: detected: CRC32 instructions
Dec 13 01:25:38.340711 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 01:25:38.340718 kernel: CPU features: detected: LSE atomic instructions
Dec 13 01:25:38.340725 kernel: CPU features: detected: Privileged Access Never
Dec 13 01:25:38.340733 kernel: CPU: All CPU(s) started at EL1
Dec 13 01:25:38.340740 kernel: alternatives: applying system-wide alternatives
Dec 13 01:25:38.340747 kernel: devtmpfs: initialized
Dec 13 01:25:38.340756 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:25:38.340764 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:25:38.340771 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:25:38.340778 kernel: SMBIOS 3.1.0 present.
Dec 13 01:25:38.340785 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Dec 13 01:25:38.340793 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:25:38.340800 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 01:25:38.340808 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 01:25:38.340817 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 01:25:38.340824 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:25:38.340832 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Dec 13 01:25:38.340839 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:25:38.340846 kernel: cpuidle: using governor menu
Dec 13 01:25:38.340854 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 01:25:38.340861 kernel: ASID allocator initialised with 32768 entries
Dec 13 01:25:38.340868 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:25:38.340876 kernel: Serial: AMBA PL011 UART driver
Dec 13 01:25:38.340884 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 13 01:25:38.340892 kernel: Modules: 0 pages in range for non-PLT usage
Dec 13 01:25:38.340899 kernel: Modules: 509040 pages in range for PLT usage
Dec 13 01:25:38.340907 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:25:38.340914 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:25:38.340921 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 01:25:38.340929 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 01:25:38.340936 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:25:38.340943 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:25:38.340952 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 01:25:38.340959 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 01:25:38.340967 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:25:38.340974 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:25:38.340981 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:25:38.340989 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:25:38.340996 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:25:38.345981 kernel: ACPI: Interpreter enabled
Dec 13 01:25:38.346011 kernel: ACPI: Using GIC for interrupt routing
Dec 13 01:25:38.346021 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 01:25:38.346033 kernel: printk: console [ttyAMA0] enabled
Dec 13 01:25:38.346041 kernel: printk: bootconsole [pl11] disabled
Dec 13 01:25:38.346048 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Dec 13 01:25:38.346056 kernel: iommu: Default domain type: Translated
Dec 13 01:25:38.346063 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 01:25:38.346071 kernel: efivars: Registered efivars operations
Dec 13 01:25:38.346078 kernel: vgaarb: loaded
Dec 13 01:25:38.346086 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 01:25:38.346093 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:25:38.346103 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:25:38.346111 kernel: pnp: PnP ACPI init
Dec 13 01:25:38.346118 kernel: pnp: PnP ACPI: found 0 devices
Dec 13 01:25:38.346125 kernel: NET: Registered PF_INET protocol family
Dec 13 01:25:38.346133 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:25:38.346141 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:25:38.346148 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:25:38.346156 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:25:38.346165 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:25:38.346172 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:25:38.346180 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:25:38.346187 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:25:38.346195 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:25:38.346202 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:25:38.346209 kernel: kvm [1]: HYP mode not available
Dec 13 01:25:38.346217 kernel: Initialise system trusted keyrings
Dec 13 01:25:38.346224 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:25:38.346233 kernel: Key type asymmetric registered
Dec 13 01:25:38.346240 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:25:38.346247 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 01:25:38.346255 kernel: io scheduler mq-deadline registered
Dec 13 01:25:38.346262 kernel: io scheduler kyber registered
Dec 13 01:25:38.346269 kernel: io scheduler bfq registered
Dec 13 01:25:38.346277 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:25:38.346285 kernel: thunder_xcv, ver 1.0
Dec 13 01:25:38.346292 kernel: thunder_bgx, ver 1.0
Dec 13 01:25:38.346299 kernel: nicpf, ver 1.0
Dec 13 01:25:38.346308 kernel: nicvf, ver 1.0
Dec 13 01:25:38.346442 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 01:25:38.346515 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:25:37 UTC (1734053137)
Dec 13 01:25:38.346526 kernel: efifb: probing for efifb
Dec 13 01:25:38.346533 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 13 01:25:38.346541 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 13 01:25:38.346548 kernel: efifb: scrolling: redraw
Dec 13 01:25:38.346558 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 13 01:25:38.346565 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 01:25:38.346573 kernel: fb0: EFI VGA frame buffer device
Dec 13 01:25:38.346581 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Dec 13 01:25:38.346588 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 01:25:38.346595 kernel: No ACPI PMU IRQ for CPU0
Dec 13 01:25:38.346603 kernel: No ACPI PMU IRQ for CPU1
Dec 13 01:25:38.346610 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Dec 13 01:25:38.346618 kernel: watchdog: Delayed init of the lockup detector failed: -19
Dec 13 01:25:38.346627 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 01:25:38.346634 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:25:38.346641 kernel: Segment Routing with IPv6
Dec 13 01:25:38.346649 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:25:38.346656 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:25:38.346663 kernel: Key type dns_resolver registered
Dec 13 01:25:38.346671 kernel: registered taskstats version 1
Dec 13 01:25:38.346678 kernel: Loading compiled-in X.509 certificates
Dec 13 01:25:38.346685 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb'
Dec 13 01:25:38.346693 kernel: Key type .fscrypt registered
Dec 13 01:25:38.346701 kernel: Key type fscrypt-provisioning registered
Dec 13 01:25:38.346709 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:25:38.346716 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:25:38.346724 kernel: ima: No architecture policies found
Dec 13 01:25:38.346731 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 01:25:38.346738 kernel: clk: Disabling unused clocks
Dec 13 01:25:38.346746 kernel: Freeing unused kernel memory: 39360K
Dec 13 01:25:38.346753 kernel: Run /init as init process
Dec 13 01:25:38.346762 kernel: with arguments:
Dec 13 01:25:38.346769 kernel: /init
Dec 13 01:25:38.346776 kernel: with environment:
Dec 13 01:25:38.346783 kernel: HOME=/
Dec 13 01:25:38.346791 kernel: TERM=linux
Dec 13 01:25:38.346798 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:25:38.346807 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:25:38.346817 systemd[1]: Detected virtualization microsoft.
Dec 13 01:25:38.346826 systemd[1]: Detected architecture arm64.
Dec 13 01:25:38.346834 systemd[1]: Running in initrd.
Dec 13 01:25:38.346841 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:25:38.346849 systemd[1]: Hostname set to .
Dec 13 01:25:38.346857 systemd[1]: Initializing machine ID from random generator.
Dec 13 01:25:38.346865 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:25:38.346872 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:25:38.346880 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:25:38.346891 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:25:38.346899 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:25:38.346907 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:25:38.346915 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:25:38.346924 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:25:38.346933 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:25:38.346941 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:25:38.346950 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:25:38.346958 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:25:38.346966 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:25:38.346974 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:25:38.346982 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:25:38.346990 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:25:38.346998 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:25:38.347024 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:25:38.347036 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:25:38.347045 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:25:38.347053 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:25:38.347061 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:25:38.347069 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:25:38.347077 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:25:38.347085 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:25:38.347093 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:25:38.347101 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:25:38.347110 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:25:38.347118 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:25:38.347144 systemd-journald[217]: Collecting audit messages is disabled.
Dec 13 01:25:38.347163 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:25:38.347174 systemd-journald[217]: Journal started
Dec 13 01:25:38.347192 systemd-journald[217]: Runtime Journal (/run/log/journal/2528435aa5124a1f8efd8e5ac8383260) is 8.0M, max 78.5M, 70.5M free.
Dec 13 01:25:38.341037 systemd-modules-load[218]: Inserted module 'overlay'
Dec 13 01:25:38.362646 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:25:38.368943 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:25:38.400752 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:25:38.400777 kernel: Bridge firewalling registered
Dec 13 01:25:38.392679 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:25:38.397558 systemd-modules-load[218]: Inserted module 'br_netfilter'
Dec 13 01:25:38.411038 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:25:38.421186 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:25:38.433523 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:25:38.460318 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:25:38.475657 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:25:38.490835 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:25:38.502197 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:25:38.531787 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:25:38.538484 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:25:38.551917 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:25:38.580245 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:25:38.591182 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:25:38.623191 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:25:38.633371 systemd-resolved[250]: Positive Trust Anchors:
Dec 13 01:25:38.633381 systemd-resolved[250]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:25:38.633414 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:25:38.635619 systemd-resolved[250]: Defaulting to hostname 'linux'.
Dec 13 01:25:38.636959 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:25:38.653037 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:25:38.662785 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:25:38.722152 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:25:38.757808 dracut-cmdline[255]: dracut-dracut-053
Dec 13 01:25:38.762645 dracut-cmdline[255]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:25:38.865095 kernel: SCSI subsystem initialized
Dec 13 01:25:38.873021 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:25:38.884040 kernel: iscsi: registered transport (tcp)
Dec 13 01:25:38.902022 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:25:38.902054 kernel: QLogic iSCSI HBA Driver
Dec 13 01:25:38.936978 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:25:38.954123 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:25:38.991027 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:25:38.991085 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:25:38.991102 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:25:39.048043 kernel: raid6: neonx8 gen() 15771 MB/s
Dec 13 01:25:39.066019 kernel: raid6: neonx4 gen() 15670 MB/s
Dec 13 01:25:39.086018 kernel: raid6: neonx2 gen() 13186 MB/s
Dec 13 01:25:39.107020 kernel: raid6: neonx1 gen() 10480 MB/s
Dec 13 01:25:39.127015 kernel: raid6: int64x8 gen() 6962 MB/s
Dec 13 01:25:39.147018 kernel: raid6: int64x4 gen() 7359 MB/s
Dec 13 01:25:39.168016 kernel: raid6: int64x2 gen() 6127 MB/s
Dec 13 01:25:39.191941 kernel: raid6: int64x1 gen() 5061 MB/s
Dec 13 01:25:39.191960 kernel: raid6: using algorithm neonx8 gen() 15771 MB/s
Dec 13 01:25:39.217344 kernel: raid6: .... xor() 11940 MB/s, rmw enabled
Dec 13 01:25:39.217368 kernel: raid6: using neon recovery algorithm
Dec 13 01:25:39.228651 kernel: xor: measuring software checksum speed
Dec 13 01:25:39.228678 kernel: 8regs : 19793 MB/sec
Dec 13 01:25:39.235894 kernel: 32regs : 18378 MB/sec
Dec 13 01:25:39.235916 kernel: arm64_neon : 27007 MB/sec
Dec 13 01:25:39.240426 kernel: xor: using function: arm64_neon (27007 MB/sec)
Dec 13 01:25:39.291024 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:25:39.302055 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:25:39.318159 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:25:39.340618 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Dec 13 01:25:39.346455 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:25:39.369261 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:25:39.387362 dracut-pre-trigger[456]: rd.md=0: removing MD RAID activation
Dec 13 01:25:39.416984 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:25:39.433217 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:25:39.467234 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:25:39.491960 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:25:39.514609 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:25:39.529623 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:25:39.537524 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:25:39.559694 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:25:39.593049 kernel: hv_vmbus: Vmbus version:5.3
Dec 13 01:25:39.595123 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:25:39.614394 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:25:39.667339 kernel: hv_vmbus: registering driver hyperv_keyboard
Dec 13 01:25:39.667368 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 01:25:39.667379 kernel: hv_vmbus: registering driver hv_netvsc
Dec 13 01:25:39.667389 kernel: hv_vmbus: registering driver hid_hyperv
Dec 13 01:25:39.667398 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Dec 13 01:25:39.667408 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Dec 13 01:25:39.667569 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 01:25:39.631174 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:25:39.631314 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:25:39.702871 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Dec 13 01:25:39.692002 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:25:39.683582 kernel: hv_vmbus: registering driver hv_storvsc
Dec 13 01:25:39.709429 kernel: PTP clock support registered
Dec 13 01:25:39.709446 kernel: hv_utils: Registering HyperV Utility Driver
Dec 13 01:25:39.709454 kernel: hv_vmbus: registering driver hv_utils
Dec 13 01:25:39.709464 kernel: hv_utils: Heartbeat IC version 3.0
Dec 13 01:25:39.709472 kernel: hv_utils: Shutdown IC version 3.2
Dec 13 01:25:39.709481 kernel: hv_utils: TimeSync IC version 4.0
Dec 13 01:25:39.709489 kernel: scsi host0: storvsc_host_t
Dec 13 01:25:39.709629 kernel: scsi host1: storvsc_host_t
Dec 13 01:25:39.709720 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Dec 13 01:25:39.709741 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Dec 13 01:25:39.709756 systemd-journald[217]: Time jumped backwards, rotating.
Dec 13 01:25:39.726686 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:25:39.730025 kernel: hv_netvsc 000d3af5-8f33-000d-3af5-8f33000d3af5 eth0: VF slot 1 added
Dec 13 01:25:39.726936 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:25:39.661403 systemd-resolved[250]: Clock change detected. Flushing caches.
Dec 13 01:25:39.702758 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:25:39.728383 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:25:39.782206 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Dec 13 01:25:39.809529 kernel: hv_vmbus: registering driver hv_pci
Dec 13 01:25:39.809547 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 01:25:39.809556 kernel: hv_pci ed800687-1df8-4250-b00e-c4aba553e961: PCI VMBus probing: Using version 0x10004
Dec 13 01:25:39.892347 kernel: hv_pci ed800687-1df8-4250-b00e-c4aba553e961: PCI host bridge to bus 1df8:00
Dec 13 01:25:39.892488 kernel: pci_bus 1df8:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Dec 13 01:25:39.892601 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Dec 13 01:25:39.892708 kernel: pci_bus 1df8:00: No busn resource found for root bus, will use [bus 00-ff]
Dec 13 01:25:39.892790 kernel: pci 1df8:00:02.0: [15b3:1018] type 00 class 0x020000
Dec 13 01:25:39.892891 kernel: pci 1df8:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Dec 13 01:25:39.892979 kernel: pci 1df8:00:02.0: enabling Extended Tags
Dec 13 01:25:39.893066 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Dec 13 01:25:39.916994 kernel: pci 1df8:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 1df8:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Dec 13 01:25:39.917142 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Dec 13 01:25:39.917265 kernel: pci_bus 1df8:00: busn_res: [bus 00-ff] end is updated to 00
Dec 13 01:25:39.917360 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 13 01:25:39.917456
kernel: pci 1df8:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 13 01:25:39.917550 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 13 01:25:39.917641 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 13 01:25:39.917732 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:39.917741 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 01:25:39.783740 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:25:39.824400 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:25:39.900769 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:25:39.973844 kernel: mlx5_core 1df8:00:02.0: enabling device (0000 -> 0002) Dec 13 01:25:40.188422 kernel: mlx5_core 1df8:00:02.0: firmware version: 16.30.1284 Dec 13 01:25:40.188551 kernel: hv_netvsc 000d3af5-8f33-000d-3af5-8f33000d3af5 eth0: VF registering: eth1 Dec 13 01:25:40.188654 kernel: mlx5_core 1df8:00:02.0 eth1: joined to eth0 Dec 13 01:25:40.188786 kernel: mlx5_core 1df8:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Dec 13 01:25:40.196177 kernel: mlx5_core 1df8:00:02.0 enP7672s1: renamed from eth1 Dec 13 01:25:40.360674 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Dec 13 01:25:40.419193 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (504) Dec 13 01:25:40.430251 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Dec 13 01:25:40.444725 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/sda3 scanned by (udev-worker) (507) Dec 13 01:25:40.456094 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
Dec 13 01:25:40.468319 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Dec 13 01:25:40.476521 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Dec 13 01:25:40.501336 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:25:40.526182 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:40.533187 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:41.542192 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:41.543020 disk-uuid[607]: The operation has completed successfully. Dec 13 01:25:41.594938 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:25:41.597128 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:25:41.639351 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:25:41.653006 sh[693]: Success Dec 13 01:25:41.680311 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 01:25:41.901295 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:25:41.918297 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:25:41.924504 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:25:41.967299 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881 Dec 13 01:25:41.967366 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:25:41.975091 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:25:41.980118 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:25:41.984361 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:25:42.389351 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Dec 13 01:25:42.395042 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:25:42.412419 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:25:42.421352 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:25:42.460485 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:25:42.460534 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:25:42.465758 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:25:42.486224 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:25:42.502354 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:25:42.508565 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:25:42.513891 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:25:42.534332 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:25:42.540989 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:25:42.562917 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:25:42.597050 systemd-networkd[877]: lo: Link UP Dec 13 01:25:42.597062 systemd-networkd[877]: lo: Gained carrier Dec 13 01:25:42.598657 systemd-networkd[877]: Enumeration completed Dec 13 01:25:42.598744 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:25:42.607732 systemd[1]: Reached target network.target - Network. Dec 13 01:25:42.611681 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 01:25:42.611684 systemd-networkd[877]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:25:42.752180 kernel: mlx5_core 1df8:00:02.0 enP7672s1: Link up Dec 13 01:25:42.790176 kernel: hv_netvsc 000d3af5-8f33-000d-3af5-8f33000d3af5 eth0: Data path switched to VF: enP7672s1 Dec 13 01:25:42.790722 systemd-networkd[877]: enP7672s1: Link UP Dec 13 01:25:42.790808 systemd-networkd[877]: eth0: Link UP Dec 13 01:25:42.790902 systemd-networkd[877]: eth0: Gained carrier Dec 13 01:25:42.790910 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:25:42.802378 systemd-networkd[877]: enP7672s1: Gained carrier Dec 13 01:25:42.826202 systemd-networkd[877]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 01:25:43.497464 ignition[875]: Ignition 2.19.0 Dec 13 01:25:43.497475 ignition[875]: Stage: fetch-offline Dec 13 01:25:43.499650 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:25:43.497509 ignition[875]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:43.497517 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:43.497610 ignition[875]: parsed url from cmdline: "" Dec 13 01:25:43.523430 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 13 01:25:43.497614 ignition[875]: no config URL provided Dec 13 01:25:43.497619 ignition[875]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:25:43.497625 ignition[875]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:25:43.497630 ignition[875]: failed to fetch config: resource requires networking Dec 13 01:25:43.497794 ignition[875]: Ignition finished successfully Dec 13 01:25:43.556290 ignition[887]: Ignition 2.19.0 Dec 13 01:25:43.556296 ignition[887]: Stage: fetch Dec 13 01:25:43.556484 ignition[887]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:43.556493 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:43.556574 ignition[887]: parsed url from cmdline: "" Dec 13 01:25:43.556577 ignition[887]: no config URL provided Dec 13 01:25:43.556581 ignition[887]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:25:43.556588 ignition[887]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:25:43.556607 ignition[887]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 13 01:25:43.720241 ignition[887]: GET result: OK Dec 13 01:25:43.720334 ignition[887]: config has been read from IMDS userdata Dec 13 01:25:43.720380 ignition[887]: parsing config with SHA512: 54672c9435a554adfbbc9d9ed71ec093b4f965625db4641fb5bd280702690dadd289f67d1b5e0b8fbd9fbabf374c00e3352edaf43330bbcc2b495b16841a5d3f Dec 13 01:25:43.724057 unknown[887]: fetched base config from "system" Dec 13 01:25:43.724476 ignition[887]: fetch: fetch complete Dec 13 01:25:43.724063 unknown[887]: fetched base config from "system" Dec 13 01:25:43.724479 ignition[887]: fetch: fetch passed Dec 13 01:25:43.724068 unknown[887]: fetched user config from "azure" Dec 13 01:25:43.724520 ignition[887]: Ignition finished successfully Dec 13 01:25:43.729817 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
Dec 13 01:25:43.765828 ignition[894]: Ignition 2.19.0 Dec 13 01:25:43.744390 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:25:43.765835 ignition[894]: Stage: kargs Dec 13 01:25:43.770479 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:25:43.766072 ignition[894]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:43.794332 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:25:43.766082 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:43.767429 ignition[894]: kargs: kargs passed Dec 13 01:25:43.767489 ignition[894]: Ignition finished successfully Dec 13 01:25:43.826013 ignition[901]: Ignition 2.19.0 Dec 13 01:25:43.826020 ignition[901]: Stage: disks Dec 13 01:25:43.830073 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:25:43.826268 ignition[901]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:43.837515 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:25:43.826279 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:43.846241 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:25:43.827528 ignition[901]: disks: disks passed Dec 13 01:25:43.857985 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:25:43.827581 ignition[901]: Ignition finished successfully Dec 13 01:25:43.868345 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:25:43.879868 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:25:43.906408 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:25:43.976315 systemd-fsck[909]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Dec 13 01:25:43.985205 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Dec 13 01:25:44.000426 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:25:44.060330 kernel: EXT4-fs (sda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none. Dec 13 01:25:44.060846 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:25:44.066478 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:25:44.105240 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:25:44.133541 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (920) Dec 13 01:25:44.133585 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:25:44.134256 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:25:44.154272 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:25:44.154296 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:25:44.154306 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:25:44.159000 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 01:25:44.172898 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:25:44.172929 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:25:44.181492 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:25:44.197730 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:25:44.224672 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 13 01:25:44.265323 systemd-networkd[877]: eth0: Gained IPv6LL Dec 13 01:25:44.622718 coreos-metadata[937]: Dec 13 01:25:44.622 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 01:25:44.632733 coreos-metadata[937]: Dec 13 01:25:44.632 INFO Fetch successful Dec 13 01:25:44.632733 coreos-metadata[937]: Dec 13 01:25:44.632 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 13 01:25:44.649861 coreos-metadata[937]: Dec 13 01:25:44.649 INFO Fetch successful Dec 13 01:25:44.657293 coreos-metadata[937]: Dec 13 01:25:44.649 INFO wrote hostname ci-4081.2.1-a-a2790899e3 to /sysroot/etc/hostname Dec 13 01:25:44.656182 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 01:25:44.713325 systemd-networkd[877]: enP7672s1: Gained IPv6LL Dec 13 01:25:44.869431 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:25:44.878831 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:25:44.887868 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:25:44.913800 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:25:45.490463 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:25:45.507319 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:25:45.530207 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:25:45.530461 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:25:45.539532 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Dec 13 01:25:45.561021 ignition[1037]: INFO : Ignition 2.19.0 Dec 13 01:25:45.561021 ignition[1037]: INFO : Stage: mount Dec 13 01:25:45.570498 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:45.570498 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:45.570498 ignition[1037]: INFO : mount: mount passed Dec 13 01:25:45.570498 ignition[1037]: INFO : Ignition finished successfully Dec 13 01:25:45.570408 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:25:45.589437 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:25:45.601192 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:25:45.631522 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:25:45.660791 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1050) Dec 13 01:25:45.660831 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:25:45.667464 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:25:45.672025 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:25:45.679184 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:25:45.680191 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:25:45.709039 ignition[1067]: INFO : Ignition 2.19.0 Dec 13 01:25:45.709039 ignition[1067]: INFO : Stage: files Dec 13 01:25:45.709039 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:45.709039 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:45.709039 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:25:45.755677 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:25:45.755677 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:25:45.829690 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:25:45.838828 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:25:45.838828 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:25:45.830075 unknown[1067]: wrote ssh authorized keys file for user: core Dec 13 01:25:45.861446 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:25:45.861446 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:25:45.861446 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 01:25:45.861446 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 01:25:46.001689 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 01:25:46.129088 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" 
Dec 13 01:25:46.129088 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:25:46.153020 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:25:46.153020 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:25:46.153020 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:25:46.153020 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:25:46.153020 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:25:46.153020 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:25:46.153020 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:25:46.153020 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:25:46.153020 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:25:46.153020 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:25:46.153020 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 
01:25:46.153020 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:25:46.153020 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Dec 13 01:25:46.460592 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 01:25:47.087149 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:25:47.087149 ignition[1067]: INFO : files: op(c): [started] processing unit "containerd.service" Dec 13 01:25:47.116277 ignition[1067]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:25:47.130337 ignition[1067]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:25:47.130337 ignition[1067]: INFO : files: op(c): [finished] processing unit "containerd.service" Dec 13 01:25:47.130337 ignition[1067]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Dec 13 01:25:47.130337 ignition[1067]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:25:47.130337 ignition[1067]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:25:47.130337 ignition[1067]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Dec 13 01:25:47.130337 ignition[1067]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Dec 13 
01:25:47.130337 ignition[1067]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:25:47.231327 ignition[1067]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:25:47.231327 ignition[1067]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:25:47.231327 ignition[1067]: INFO : files: files passed Dec 13 01:25:47.231327 ignition[1067]: INFO : Ignition finished successfully Dec 13 01:25:47.145611 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:25:47.179459 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:25:47.189325 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:25:47.302324 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:25:47.302324 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:25:47.244082 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:25:47.340457 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:25:47.244184 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:25:47.287864 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:25:47.295594 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:25:47.327517 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:25:47.358885 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:25:47.359015 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Dec 13 01:25:47.372821 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:25:47.383709 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:25:47.395814 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:25:47.413394 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:25:47.450366 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:25:47.470389 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:25:47.489988 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:25:47.490102 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:25:47.502113 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:25:47.514545 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:25:47.529052 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:25:47.541627 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:25:47.541694 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:25:47.558471 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:25:47.564072 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:25:47.575228 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:25:47.587606 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:25:47.599689 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:25:47.613006 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:25:47.625763 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Dec 13 01:25:47.640034 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:25:47.652209 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:25:47.666510 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:25:47.677401 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:25:47.677480 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:25:47.694432 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:25:47.705981 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:25:47.718556 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:25:47.718603 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:25:47.731146 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:25:47.731221 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:25:47.750066 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:25:47.750135 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:25:47.764623 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:25:47.764674 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:25:47.776470 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 01:25:47.776514 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Dec 13 01:25:47.849156 ignition[1121]: INFO : Ignition 2.19.0 Dec 13 01:25:47.849156 ignition[1121]: INFO : Stage: umount Dec 13 01:25:47.849156 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:47.849156 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:47.849156 ignition[1121]: INFO : umount: umount passed Dec 13 01:25:47.849156 ignition[1121]: INFO : Ignition finished successfully Dec 13 01:25:47.812385 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:25:47.831028 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:25:47.831103 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:25:47.851345 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:25:47.856962 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:25:47.857026 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:25:47.864712 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:25:47.864763 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:25:47.889991 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:25:47.890079 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:25:47.904379 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:25:47.904435 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:25:47.915108 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:25:47.915156 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:25:47.926866 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:25:47.926916 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:25:47.935481 systemd[1]: Stopped target network.target - Network. 
Dec 13 01:25:47.945676 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:25:47.945741 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:25:47.953605 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:25:47.966117 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:25:47.973257 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:25:47.981706 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:25:47.992491 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:25:48.003003 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:25:48.003054 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:25:48.015233 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:25:48.015276 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:25:48.028555 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:25:48.028618 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:25:48.042685 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:25:48.289995 kernel: hv_netvsc 000d3af5-8f33-000d-3af5-8f33000d3af5 eth0: Data path switched from VF: enP7672s1 Dec 13 01:25:48.042735 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:25:48.056885 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:25:48.066333 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:25:48.078206 systemd-networkd[877]: eth0: DHCPv6 lease lost Dec 13 01:25:48.079750 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:25:48.086373 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:25:48.086595 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Dec 13 01:25:48.100684 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:25:48.100793 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:25:48.114696 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:25:48.114756 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:25:48.136492 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:25:48.145831 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:25:48.145898 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:25:48.157586 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:25:48.157646 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:25:48.168945 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:25:48.168997 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:25:48.183968 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:25:48.184019 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:25:48.195902 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:25:48.229256 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:25:48.229450 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:25:48.244355 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:25:48.244402 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:25:48.256645 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:25:48.256686 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Dec 13 01:25:48.276488 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:25:48.276543 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:25:48.290025 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:25:48.290076 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:25:48.301460 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:25:48.301510 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:25:48.341362 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:25:48.355284 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:25:48.355352 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:25:48.369149 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:25:48.369204 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:25:48.381299 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:25:48.381378 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:25:48.400846 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:25:48.400988 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:25:48.504898 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:25:48.505029 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:25:48.513477 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:25:48.525121 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:25:48.525186 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:25:48.563393 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Dec 13 01:25:48.641555 systemd[1]: Switching root. Dec 13 01:25:48.664355 systemd-journald[217]: Journal stopped Dec 13 01:25:38.339759 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 13 01:25:38.339781 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024 Dec 13 01:25:38.339789 kernel: KASLR enabled Dec 13 01:25:38.339795 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Dec 13 01:25:38.339802 kernel: printk: bootconsole [pl11] enabled Dec 13 01:25:38.339808 kernel: efi: EFI v2.7 by EDK II Dec 13 01:25:38.339815 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f214018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Dec 13 01:25:38.339821 kernel: random: crng init done Dec 13 01:25:38.339827 kernel: ACPI: Early table checksum verification disabled Dec 13 01:25:38.339833 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Dec 13 01:25:38.339839 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:25:38.339846 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:25:38.339853 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Dec 13 01:25:38.339859 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:25:38.339867 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:25:38.339873 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:25:38.339879 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:25:38.339887 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:25:38.339894 
kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:25:38.339900 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Dec 13 01:25:38.339907 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:25:38.339913 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Dec 13 01:25:38.339919 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Dec 13 01:25:38.339926 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Dec 13 01:25:38.339932 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Dec 13 01:25:38.339939 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Dec 13 01:25:38.339945 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Dec 13 01:25:38.339951 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Dec 13 01:25:38.339959 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Dec 13 01:25:38.339966 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Dec 13 01:25:38.339972 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Dec 13 01:25:38.339978 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Dec 13 01:25:38.339985 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Dec 13 01:25:38.339991 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Dec 13 01:25:38.339998 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Dec 13 01:25:38.340018 kernel: Zone ranges: Dec 13 01:25:38.340026 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Dec 13 01:25:38.340032 kernel: DMA32 empty Dec 13 01:25:38.340039 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Dec 13 01:25:38.340045 kernel: Movable zone start for each node Dec 13 01:25:38.340056 kernel: Early memory node ranges Dec 13 01:25:38.340063 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] 
Dec 13 01:25:38.340070 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Dec 13 01:25:38.340076 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Dec 13 01:25:38.340083 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Dec 13 01:25:38.340091 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Dec 13 01:25:38.340098 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Dec 13 01:25:38.340105 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Dec 13 01:25:38.340112 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Dec 13 01:25:38.340119 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Dec 13 01:25:38.340126 kernel: psci: probing for conduit method from ACPI. Dec 13 01:25:38.340132 kernel: psci: PSCIv1.1 detected in firmware. Dec 13 01:25:38.340139 kernel: psci: Using standard PSCI v0.2 function IDs Dec 13 01:25:38.340146 kernel: psci: MIGRATE_INFO_TYPE not supported. Dec 13 01:25:38.340153 kernel: psci: SMC Calling Convention v1.4 Dec 13 01:25:38.340159 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Dec 13 01:25:38.340166 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Dec 13 01:25:38.340175 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Dec 13 01:25:38.340181 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Dec 13 01:25:38.340188 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 13 01:25:38.340195 kernel: Detected PIPT I-cache on CPU0 Dec 13 01:25:38.340202 kernel: CPU features: detected: GIC system register CPU interface Dec 13 01:25:38.340209 kernel: CPU features: detected: Hardware dirty bit management Dec 13 01:25:38.340215 kernel: CPU features: detected: Spectre-BHB Dec 13 01:25:38.340222 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 13 01:25:38.340229 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 13 01:25:38.340236 kernel: CPU features: detected: ARM erratum 1418040 Dec 13 01:25:38.340243 
kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Dec 13 01:25:38.340251 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 13 01:25:38.340258 kernel: alternatives: applying boot alternatives Dec 13 01:25:38.340266 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:25:38.340273 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:25:38.340280 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:25:38.340287 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:25:38.340294 kernel: Fallback order for Node 0: 0 Dec 13 01:25:38.340301 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Dec 13 01:25:38.340307 kernel: Policy zone: Normal Dec 13 01:25:38.340314 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:25:38.340321 kernel: software IO TLB: area num 2. Dec 13 01:25:38.340329 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Dec 13 01:25:38.340336 kernel: Memory: 3982756K/4194160K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 211404K reserved, 0K cma-reserved) Dec 13 01:25:38.340343 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 01:25:38.340350 kernel: trace event string verifier disabled Dec 13 01:25:38.340357 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:25:38.340364 kernel: rcu: RCU event tracing is enabled. 
Dec 13 01:25:38.340372 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 01:25:38.340379 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:25:38.340385 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:25:38.340392 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:25:38.340399 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 01:25:38.340407 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 01:25:38.340414 kernel: GICv3: 960 SPIs implemented Dec 13 01:25:38.340421 kernel: GICv3: 0 Extended SPIs implemented Dec 13 01:25:38.340428 kernel: Root IRQ handler: gic_handle_irq Dec 13 01:25:38.340435 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 13 01:25:38.340442 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Dec 13 01:25:38.340449 kernel: ITS: No ITS available, not enabling LPIs Dec 13 01:25:38.340456 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:25:38.340462 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:25:38.340469 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 13 01:25:38.340477 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 13 01:25:38.340484 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 13 01:25:38.340492 kernel: Console: colour dummy device 80x25 Dec 13 01:25:38.340499 kernel: printk: console [tty1] enabled Dec 13 01:25:38.340506 kernel: ACPI: Core revision 20230628 Dec 13 01:25:38.340513 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 13 01:25:38.340520 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:25:38.340527 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:25:38.340534 kernel: landlock: Up and running. 
Dec 13 01:25:38.340541 kernel: SELinux: Initializing. Dec 13 01:25:38.340548 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:25:38.340557 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:25:38.340564 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:25:38.340571 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:25:38.340578 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Dec 13 01:25:38.340585 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0 Dec 13 01:25:38.340592 kernel: Hyper-V: enabling crash_kexec_post_notifiers Dec 13 01:25:38.340599 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:25:38.340612 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:25:38.340619 kernel: Remapping and enabling EFI services. Dec 13 01:25:38.340627 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:25:38.340634 kernel: Detected PIPT I-cache on CPU1 Dec 13 01:25:38.340642 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Dec 13 01:25:38.340650 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:25:38.340657 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 13 01:25:38.340665 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:25:38.340672 kernel: SMP: Total of 2 processors activated. 
Dec 13 01:25:38.340679 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 01:25:38.340689 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Dec 13 01:25:38.340696 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 13 01:25:38.340703 kernel: CPU features: detected: CRC32 instructions Dec 13 01:25:38.340711 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 13 01:25:38.340718 kernel: CPU features: detected: LSE atomic instructions Dec 13 01:25:38.340725 kernel: CPU features: detected: Privileged Access Never Dec 13 01:25:38.340733 kernel: CPU: All CPU(s) started at EL1 Dec 13 01:25:38.340740 kernel: alternatives: applying system-wide alternatives Dec 13 01:25:38.340747 kernel: devtmpfs: initialized Dec 13 01:25:38.340756 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:25:38.340764 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 01:25:38.340771 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:25:38.340778 kernel: SMBIOS 3.1.0 present. 
Dec 13 01:25:38.340785 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Dec 13 01:25:38.340793 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:25:38.340800 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 13 01:25:38.340808 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 01:25:38.340817 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 01:25:38.340824 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:25:38.340832 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Dec 13 01:25:38.340839 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:25:38.340846 kernel: cpuidle: using governor menu Dec 13 01:25:38.340854 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 13 01:25:38.340861 kernel: ASID allocator initialised with 32768 entries Dec 13 01:25:38.340868 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:25:38.340876 kernel: Serial: AMBA PL011 UART driver Dec 13 01:25:38.340884 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 13 01:25:38.340892 kernel: Modules: 0 pages in range for non-PLT usage Dec 13 01:25:38.340899 kernel: Modules: 509040 pages in range for PLT usage Dec 13 01:25:38.340907 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:25:38.340914 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:25:38.340921 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 01:25:38.340929 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 13 01:25:38.340936 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:25:38.340943 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:25:38.340952 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 01:25:38.340959 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 13 01:25:38.340967 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:25:38.340974 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:25:38.340981 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:25:38.340989 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:25:38.340996 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:25:38.345981 kernel: ACPI: Interpreter enabled Dec 13 01:25:38.346011 kernel: ACPI: Using GIC for interrupt routing Dec 13 01:25:38.346021 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Dec 13 01:25:38.346033 kernel: printk: console [ttyAMA0] enabled Dec 13 01:25:38.346041 kernel: printk: bootconsole [pl11] disabled Dec 13 01:25:38.346048 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Dec 13 01:25:38.346056 kernel: iommu: Default domain type: Translated Dec 13 01:25:38.346063 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 01:25:38.346071 kernel: efivars: Registered efivars operations Dec 13 01:25:38.346078 kernel: vgaarb: loaded Dec 13 01:25:38.346086 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 01:25:38.346093 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:25:38.346103 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:25:38.346111 kernel: pnp: PnP ACPI init Dec 13 01:25:38.346118 kernel: pnp: PnP ACPI: found 0 devices Dec 13 01:25:38.346125 kernel: NET: Registered PF_INET protocol family Dec 13 01:25:38.346133 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:25:38.346141 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 01:25:38.346148 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 
01:25:38.346156 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:25:38.346165 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 01:25:38.346172 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 01:25:38.346180 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:25:38.346187 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:25:38.346195 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:25:38.346202 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:25:38.346209 kernel: kvm [1]: HYP mode not available Dec 13 01:25:38.346217 kernel: Initialise system trusted keyrings Dec 13 01:25:38.346224 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 01:25:38.346233 kernel: Key type asymmetric registered Dec 13 01:25:38.346240 kernel: Asymmetric key parser 'x509' registered Dec 13 01:25:38.346247 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 01:25:38.346255 kernel: io scheduler mq-deadline registered Dec 13 01:25:38.346262 kernel: io scheduler kyber registered Dec 13 01:25:38.346269 kernel: io scheduler bfq registered Dec 13 01:25:38.346277 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:25:38.346285 kernel: thunder_xcv, ver 1.0 Dec 13 01:25:38.346292 kernel: thunder_bgx, ver 1.0 Dec 13 01:25:38.346299 kernel: nicpf, ver 1.0 Dec 13 01:25:38.346308 kernel: nicvf, ver 1.0 Dec 13 01:25:38.346442 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 01:25:38.346515 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:25:37 UTC (1734053137) Dec 13 01:25:38.346526 kernel: efifb: probing for efifb Dec 13 01:25:38.346533 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 13 01:25:38.346541 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 13 01:25:38.346548 kernel: efifb: scrolling: 
redraw Dec 13 01:25:38.346558 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 01:25:38.346565 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 01:25:38.346573 kernel: fb0: EFI VGA frame buffer device Dec 13 01:25:38.346581 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Dec 13 01:25:38.346588 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 01:25:38.346595 kernel: No ACPI PMU IRQ for CPU0 Dec 13 01:25:38.346603 kernel: No ACPI PMU IRQ for CPU1 Dec 13 01:25:38.346610 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Dec 13 01:25:38.346618 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 01:25:38.346627 kernel: watchdog: Hard watchdog permanently disabled Dec 13 01:25:38.346634 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:25:38.346641 kernel: Segment Routing with IPv6 Dec 13 01:25:38.346649 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:25:38.346656 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:25:38.346663 kernel: Key type dns_resolver registered Dec 13 01:25:38.346671 kernel: registered taskstats version 1 Dec 13 01:25:38.346678 kernel: Loading compiled-in X.509 certificates Dec 13 01:25:38.346685 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb' Dec 13 01:25:38.346693 kernel: Key type .fscrypt registered Dec 13 01:25:38.346701 kernel: Key type fscrypt-provisioning registered Dec 13 01:25:38.346709 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 01:25:38.346716 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:25:38.346724 kernel: ima: No architecture policies found Dec 13 01:25:38.346731 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 01:25:38.346738 kernel: clk: Disabling unused clocks Dec 13 01:25:38.346746 kernel: Freeing unused kernel memory: 39360K Dec 13 01:25:38.346753 kernel: Run /init as init process Dec 13 01:25:38.346762 kernel: with arguments: Dec 13 01:25:38.346769 kernel: /init Dec 13 01:25:38.346776 kernel: with environment: Dec 13 01:25:38.346783 kernel: HOME=/ Dec 13 01:25:38.346791 kernel: TERM=linux Dec 13 01:25:38.346798 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:25:38.346807 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:25:38.346817 systemd[1]: Detected virtualization microsoft. Dec 13 01:25:38.346826 systemd[1]: Detected architecture arm64. Dec 13 01:25:38.346834 systemd[1]: Running in initrd. Dec 13 01:25:38.346841 systemd[1]: No hostname configured, using default hostname. Dec 13 01:25:38.346849 systemd[1]: Hostname set to . Dec 13 01:25:38.346857 systemd[1]: Initializing machine ID from random generator. Dec 13 01:25:38.346865 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:25:38.346872 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:25:38.346880 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:25:38.346891 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Dec 13 01:25:38.346899 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:25:38.346907 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:25:38.346915 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:25:38.346924 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:25:38.346933 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:25:38.346941 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:25:38.346950 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:25:38.346958 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:25:38.346966 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:25:38.346974 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:25:38.346982 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:25:38.346990 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:25:38.346998 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:25:38.347024 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:25:38.347036 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:25:38.347045 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:25:38.347053 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:25:38.347061 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:25:38.347069 systemd[1]: Reached target sockets.target - Socket Units. 
Dec 13 01:25:38.347077 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:25:38.347085 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:25:38.347093 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:25:38.347101 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:25:38.347110 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:25:38.347118 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:25:38.347144 systemd-journald[217]: Collecting audit messages is disabled. Dec 13 01:25:38.347163 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:25:38.347174 systemd-journald[217]: Journal started Dec 13 01:25:38.347192 systemd-journald[217]: Runtime Journal (/run/log/journal/2528435aa5124a1f8efd8e5ac8383260) is 8.0M, max 78.5M, 70.5M free. Dec 13 01:25:38.341037 systemd-modules-load[218]: Inserted module 'overlay' Dec 13 01:25:38.362646 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:25:38.368943 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:25:38.400752 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:25:38.400777 kernel: Bridge firewalling registered Dec 13 01:25:38.392679 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:25:38.397558 systemd-modules-load[218]: Inserted module 'br_netfilter' Dec 13 01:25:38.411038 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:25:38.421186 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:25:38.433523 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 13 01:25:38.460318 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:25:38.475657 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:25:38.490835 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:25:38.502197 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:25:38.531787 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:25:38.538484 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:25:38.551917 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:25:38.580245 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:25:38.591182 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:25:38.623191 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:25:38.633371 systemd-resolved[250]: Positive Trust Anchors: Dec 13 01:25:38.633381 systemd-resolved[250]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:25:38.633414 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:25:38.635619 systemd-resolved[250]: Defaulting to hostname 'linux'. Dec 13 01:25:38.636959 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:25:38.653037 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:25:38.662785 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:25:38.722152 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:25:38.757808 dracut-cmdline[255]: dracut-dracut-053 Dec 13 01:25:38.762645 dracut-cmdline[255]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:25:38.865095 kernel: SCSI subsystem initialized Dec 13 01:25:38.873021 kernel: Loading iSCSI transport class v2.0-870. 
Dec 13 01:25:38.884040 kernel: iscsi: registered transport (tcp) Dec 13 01:25:38.902022 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:25:38.902054 kernel: QLogic iSCSI HBA Driver Dec 13 01:25:38.936978 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:25:38.954123 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:25:38.991027 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:25:38.991085 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:25:38.991102 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:25:39.048043 kernel: raid6: neonx8 gen() 15771 MB/s Dec 13 01:25:39.066019 kernel: raid6: neonx4 gen() 15670 MB/s Dec 13 01:25:39.086018 kernel: raid6: neonx2 gen() 13186 MB/s Dec 13 01:25:39.107020 kernel: raid6: neonx1 gen() 10480 MB/s Dec 13 01:25:39.127015 kernel: raid6: int64x8 gen() 6962 MB/s Dec 13 01:25:39.147018 kernel: raid6: int64x4 gen() 7359 MB/s Dec 13 01:25:39.168016 kernel: raid6: int64x2 gen() 6127 MB/s Dec 13 01:25:39.191941 kernel: raid6: int64x1 gen() 5061 MB/s Dec 13 01:25:39.191960 kernel: raid6: using algorithm neonx8 gen() 15771 MB/s Dec 13 01:25:39.217344 kernel: raid6: .... xor() 11940 MB/s, rmw enabled Dec 13 01:25:39.217368 kernel: raid6: using neon recovery algorithm Dec 13 01:25:39.228651 kernel: xor: measuring software checksum speed Dec 13 01:25:39.228678 kernel: 8regs : 19793 MB/sec Dec 13 01:25:39.235894 kernel: 32regs : 18378 MB/sec Dec 13 01:25:39.235916 kernel: arm64_neon : 27007 MB/sec Dec 13 01:25:39.240426 kernel: xor: using function: arm64_neon (27007 MB/sec) Dec 13 01:25:39.291024 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:25:39.302055 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Dec 13 01:25:39.318159 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:25:39.340618 systemd-udevd[436]: Using default interface naming scheme 'v255'. Dec 13 01:25:39.346455 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:25:39.369261 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:25:39.387362 dracut-pre-trigger[456]: rd.md=0: removing MD RAID activation Dec 13 01:25:39.416984 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:25:39.433217 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:25:39.467234 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:25:39.491960 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:25:39.514609 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:25:39.529623 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:25:39.537524 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:25:39.559694 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:25:39.593049 kernel: hv_vmbus: Vmbus version:5.3 Dec 13 01:25:39.595123 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:25:39.614394 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:25:39.667339 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 13 01:25:39.667368 kernel: pps_core: LinuxPPS API ver. 
1 registered Dec 13 01:25:39.667379 kernel: hv_vmbus: registering driver hv_netvsc Dec 13 01:25:39.667389 kernel: hv_vmbus: registering driver hid_hyperv Dec 13 01:25:39.667398 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Dec 13 01:25:39.667408 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 13 01:25:39.667569 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 01:25:39.631174 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:25:39.631314 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:25:39.702871 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Dec 13 01:25:39.692002 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:25:39.683582 kernel: hv_vmbus: registering driver hv_storvsc Dec 13 01:25:39.709429 kernel: PTP clock support registered Dec 13 01:25:39.709446 kernel: hv_utils: Registering HyperV Utility Driver Dec 13 01:25:39.709454 kernel: hv_vmbus: registering driver hv_utils Dec 13 01:25:39.709464 kernel: hv_utils: Heartbeat IC version 3.0 Dec 13 01:25:39.709472 kernel: hv_utils: Shutdown IC version 3.2 Dec 13 01:25:39.709481 kernel: hv_utils: TimeSync IC version 4.0 Dec 13 01:25:39.709489 kernel: scsi host0: storvsc_host_t Dec 13 01:25:39.709629 kernel: scsi host1: storvsc_host_t Dec 13 01:25:39.709720 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 13 01:25:39.709741 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Dec 13 01:25:39.709756 systemd-journald[217]: Time jumped backwards, rotating. Dec 13 01:25:39.726686 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Dec 13 01:25:39.730025 kernel: hv_netvsc 000d3af5-8f33-000d-3af5-8f33000d3af5 eth0: VF slot 1 added Dec 13 01:25:39.726936 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:25:39.661403 systemd-resolved[250]: Clock change detected. Flushing caches. Dec 13 01:25:39.702758 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:25:39.728383 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:25:39.782206 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 13 01:25:39.809529 kernel: hv_vmbus: registering driver hv_pci Dec 13 01:25:39.809547 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:25:39.809556 kernel: hv_pci ed800687-1df8-4250-b00e-c4aba553e961: PCI VMBus probing: Using version 0x10004 Dec 13 01:25:39.892347 kernel: hv_pci ed800687-1df8-4250-b00e-c4aba553e961: PCI host bridge to bus 1df8:00 Dec 13 01:25:39.892488 kernel: pci_bus 1df8:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Dec 13 01:25:39.892601 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 13 01:25:39.892708 kernel: pci_bus 1df8:00: No busn resource found for root bus, will use [bus 00-ff] Dec 13 01:25:39.892790 kernel: pci 1df8:00:02.0: [15b3:1018] type 00 class 0x020000 Dec 13 01:25:39.892891 kernel: pci 1df8:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 13 01:25:39.892979 kernel: pci 1df8:00:02.0: enabling Extended Tags Dec 13 01:25:39.893066 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 13 01:25:39.916994 kernel: pci 1df8:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 1df8:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Dec 13 01:25:39.917142 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 01:25:39.917265 kernel: pci_bus 1df8:00: busn_res: [bus 00-ff] end is updated to 00 Dec 13 01:25:39.917360 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 01:25:39.917456 
kernel: pci 1df8:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 13 01:25:39.917550 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 13 01:25:39.917641 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 13 01:25:39.917732 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:39.917741 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 01:25:39.783740 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:25:39.824400 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:25:39.900769 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:25:39.973844 kernel: mlx5_core 1df8:00:02.0: enabling device (0000 -> 0002) Dec 13 01:25:40.188422 kernel: mlx5_core 1df8:00:02.0: firmware version: 16.30.1284 Dec 13 01:25:40.188551 kernel: hv_netvsc 000d3af5-8f33-000d-3af5-8f33000d3af5 eth0: VF registering: eth1 Dec 13 01:25:40.188654 kernel: mlx5_core 1df8:00:02.0 eth1: joined to eth0 Dec 13 01:25:40.188786 kernel: mlx5_core 1df8:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Dec 13 01:25:40.196177 kernel: mlx5_core 1df8:00:02.0 enP7672s1: renamed from eth1 Dec 13 01:25:40.360674 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Dec 13 01:25:40.419193 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (504) Dec 13 01:25:40.430251 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Dec 13 01:25:40.444725 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/sda3 scanned by (udev-worker) (507) Dec 13 01:25:40.456094 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
Dec 13 01:25:40.468319 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Dec 13 01:25:40.476521 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Dec 13 01:25:40.501336 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:25:40.526182 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:40.533187 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:41.542192 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:41.543020 disk-uuid[607]: The operation has completed successfully. Dec 13 01:25:41.594938 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:25:41.597128 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:25:41.639351 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:25:41.653006 sh[693]: Success Dec 13 01:25:41.680311 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 01:25:41.901295 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:25:41.918297 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:25:41.924504 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:25:41.967299 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881 Dec 13 01:25:41.967366 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:25:41.975091 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:25:41.980118 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:25:41.984361 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:25:42.389351 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Dec 13 01:25:42.395042 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:25:42.412419 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:25:42.421352 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:25:42.460485 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:25:42.460534 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:25:42.465758 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:25:42.486224 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:25:42.502354 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:25:42.508565 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:25:42.513891 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:25:42.534332 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:25:42.540989 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:25:42.562917 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:25:42.597050 systemd-networkd[877]: lo: Link UP Dec 13 01:25:42.597062 systemd-networkd[877]: lo: Gained carrier Dec 13 01:25:42.598657 systemd-networkd[877]: Enumeration completed Dec 13 01:25:42.598744 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:25:42.607732 systemd[1]: Reached target network.target - Network. Dec 13 01:25:42.611681 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 01:25:42.611684 systemd-networkd[877]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:25:42.752180 kernel: mlx5_core 1df8:00:02.0 enP7672s1: Link up Dec 13 01:25:42.790176 kernel: hv_netvsc 000d3af5-8f33-000d-3af5-8f33000d3af5 eth0: Data path switched to VF: enP7672s1 Dec 13 01:25:42.790722 systemd-networkd[877]: enP7672s1: Link UP Dec 13 01:25:42.790808 systemd-networkd[877]: eth0: Link UP Dec 13 01:25:42.790902 systemd-networkd[877]: eth0: Gained carrier Dec 13 01:25:42.790910 systemd-networkd[877]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:25:42.802378 systemd-networkd[877]: enP7672s1: Gained carrier Dec 13 01:25:42.826202 systemd-networkd[877]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 01:25:43.497464 ignition[875]: Ignition 2.19.0 Dec 13 01:25:43.497475 ignition[875]: Stage: fetch-offline Dec 13 01:25:43.499650 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:25:43.497509 ignition[875]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:43.497517 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:43.497610 ignition[875]: parsed url from cmdline: "" Dec 13 01:25:43.523430 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 13 01:25:43.497614 ignition[875]: no config URL provided Dec 13 01:25:43.497619 ignition[875]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:25:43.497625 ignition[875]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:25:43.497630 ignition[875]: failed to fetch config: resource requires networking Dec 13 01:25:43.497794 ignition[875]: Ignition finished successfully Dec 13 01:25:43.556290 ignition[887]: Ignition 2.19.0 Dec 13 01:25:43.556296 ignition[887]: Stage: fetch Dec 13 01:25:43.556484 ignition[887]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:43.556493 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:43.556574 ignition[887]: parsed url from cmdline: "" Dec 13 01:25:43.556577 ignition[887]: no config URL provided Dec 13 01:25:43.556581 ignition[887]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:25:43.556588 ignition[887]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:25:43.556607 ignition[887]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 13 01:25:43.720241 ignition[887]: GET result: OK Dec 13 01:25:43.720334 ignition[887]: config has been read from IMDS userdata Dec 13 01:25:43.720380 ignition[887]: parsing config with SHA512: 54672c9435a554adfbbc9d9ed71ec093b4f965625db4641fb5bd280702690dadd289f67d1b5e0b8fbd9fbabf374c00e3352edaf43330bbcc2b495b16841a5d3f Dec 13 01:25:43.724057 unknown[887]: fetched base config from "system" Dec 13 01:25:43.724476 ignition[887]: fetch: fetch complete Dec 13 01:25:43.724063 unknown[887]: fetched base config from "system" Dec 13 01:25:43.724479 ignition[887]: fetch: fetch passed Dec 13 01:25:43.724068 unknown[887]: fetched user config from "azure" Dec 13 01:25:43.724520 ignition[887]: Ignition finished successfully Dec 13 01:25:43.729817 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
Dec 13 01:25:43.765828 ignition[894]: Ignition 2.19.0 Dec 13 01:25:43.744390 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:25:43.765835 ignition[894]: Stage: kargs Dec 13 01:25:43.770479 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:25:43.766072 ignition[894]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:43.794332 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:25:43.766082 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:43.767429 ignition[894]: kargs: kargs passed Dec 13 01:25:43.767489 ignition[894]: Ignition finished successfully Dec 13 01:25:43.826013 ignition[901]: Ignition 2.19.0 Dec 13 01:25:43.826020 ignition[901]: Stage: disks Dec 13 01:25:43.830073 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:25:43.826268 ignition[901]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:43.837515 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:25:43.826279 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:43.846241 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:25:43.827528 ignition[901]: disks: disks passed Dec 13 01:25:43.857985 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:25:43.827581 ignition[901]: Ignition finished successfully Dec 13 01:25:43.868345 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:25:43.879868 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:25:43.906408 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:25:43.976315 systemd-fsck[909]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Dec 13 01:25:43.985205 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Dec 13 01:25:44.000426 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:25:44.060330 kernel: EXT4-fs (sda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none. Dec 13 01:25:44.060846 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:25:44.066478 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:25:44.105240 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:25:44.133541 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (920) Dec 13 01:25:44.133585 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:25:44.134256 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:25:44.154272 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:25:44.154296 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:25:44.154306 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:25:44.159000 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 01:25:44.172898 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:25:44.172929 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:25:44.181492 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:25:44.197730 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:25:44.224672 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 13 01:25:44.265323 systemd-networkd[877]: eth0: Gained IPv6LL Dec 13 01:25:44.622718 coreos-metadata[937]: Dec 13 01:25:44.622 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 01:25:44.632733 coreos-metadata[937]: Dec 13 01:25:44.632 INFO Fetch successful Dec 13 01:25:44.632733 coreos-metadata[937]: Dec 13 01:25:44.632 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 13 01:25:44.649861 coreos-metadata[937]: Dec 13 01:25:44.649 INFO Fetch successful Dec 13 01:25:44.657293 coreos-metadata[937]: Dec 13 01:25:44.649 INFO wrote hostname ci-4081.2.1-a-a2790899e3 to /sysroot/etc/hostname Dec 13 01:25:44.656182 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 01:25:44.713325 systemd-networkd[877]: enP7672s1: Gained IPv6LL Dec 13 01:25:44.869431 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:25:44.878831 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:25:44.887868 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:25:44.913800 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:25:45.490463 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:25:45.507319 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:25:45.530207 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:25:45.530461 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:25:45.539532 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Dec 13 01:25:45.561021 ignition[1037]: INFO : Ignition 2.19.0 Dec 13 01:25:45.561021 ignition[1037]: INFO : Stage: mount Dec 13 01:25:45.570498 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:45.570498 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:45.570498 ignition[1037]: INFO : mount: mount passed Dec 13 01:25:45.570498 ignition[1037]: INFO : Ignition finished successfully Dec 13 01:25:45.570408 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:25:45.589437 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:25:45.601192 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:25:45.631522 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:25:45.660791 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1050) Dec 13 01:25:45.660831 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:25:45.667464 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:25:45.672025 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:25:45.679184 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:25:45.680191 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:25:45.709039 ignition[1067]: INFO : Ignition 2.19.0 Dec 13 01:25:45.709039 ignition[1067]: INFO : Stage: files Dec 13 01:25:45.709039 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:45.709039 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:45.709039 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:25:45.755677 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:25:45.755677 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:25:45.829690 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:25:45.838828 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:25:45.838828 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:25:45.830075 unknown[1067]: wrote ssh authorized keys file for user: core Dec 13 01:25:45.861446 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:25:45.861446 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:25:45.861446 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 01:25:45.861446 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 01:25:46.001689 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 01:25:46.129088 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" 
Dec 13 01:25:46.129088 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:25:46.153020 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:25:46.153020 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:25:46.153020 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:25:46.153020 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:25:46.153020 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:25:46.153020 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:25:46.153020 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:25:46.153020 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:25:46.153020 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:25:46.153020 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:25:46.153020 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 
01:25:46.153020 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:25:46.153020 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Dec 13 01:25:46.460592 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 01:25:47.087149 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:25:47.087149 ignition[1067]: INFO : files: op(c): [started] processing unit "containerd.service" Dec 13 01:25:47.116277 ignition[1067]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:25:47.130337 ignition[1067]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:25:47.130337 ignition[1067]: INFO : files: op(c): [finished] processing unit "containerd.service" Dec 13 01:25:47.130337 ignition[1067]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Dec 13 01:25:47.130337 ignition[1067]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:25:47.130337 ignition[1067]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:25:47.130337 ignition[1067]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Dec 13 01:25:47.130337 ignition[1067]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Dec 13 
01:25:47.130337 ignition[1067]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:25:47.231327 ignition[1067]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:25:47.231327 ignition[1067]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:25:47.231327 ignition[1067]: INFO : files: files passed Dec 13 01:25:47.231327 ignition[1067]: INFO : Ignition finished successfully Dec 13 01:25:47.145611 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:25:47.179459 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:25:47.189325 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:25:47.302324 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:25:47.302324 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:25:47.244082 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:25:47.340457 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:25:47.244184 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:25:47.287864 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:25:47.295594 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:25:47.327517 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:25:47.358885 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:25:47.359015 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Dec 13 01:25:47.372821 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:25:47.383709 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:25:47.395814 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:25:47.413394 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:25:47.450366 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:25:47.470389 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:25:47.489988 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:25:47.490102 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:25:47.502113 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:25:47.514545 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:25:47.529052 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:25:47.541627 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:25:47.541694 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:25:47.558471 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:25:47.564072 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:25:47.575228 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:25:47.587606 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:25:47.599689 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:25:47.613006 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:25:47.625763 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Dec 13 01:25:47.640034 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:25:47.652209 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:25:47.666510 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:25:47.677401 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:25:47.677480 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:25:47.694432 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:25:47.705981 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:25:47.718556 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:25:47.718603 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:25:47.731146 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:25:47.731221 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:25:47.750066 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:25:47.750135 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:25:47.764623 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:25:47.764674 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:25:47.776470 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 01:25:47.776514 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Dec 13 01:25:47.849156 ignition[1121]: INFO : Ignition 2.19.0 Dec 13 01:25:47.849156 ignition[1121]: INFO : Stage: umount Dec 13 01:25:47.849156 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:47.849156 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:25:47.849156 ignition[1121]: INFO : umount: umount passed Dec 13 01:25:47.849156 ignition[1121]: INFO : Ignition finished successfully Dec 13 01:25:47.812385 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:25:47.831028 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:25:47.831103 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:25:47.851345 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:25:47.856962 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:25:47.857026 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:25:47.864712 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:25:47.864763 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:25:47.889991 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:25:47.890079 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:25:47.904379 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:25:47.904435 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:25:47.915108 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:25:47.915156 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:25:47.926866 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:25:47.926916 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:25:47.935481 systemd[1]: Stopped target network.target - Network. 
Dec 13 01:25:47.945676 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:25:47.945741 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:25:47.953605 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:25:47.966117 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:25:47.973257 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:25:47.981706 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:25:47.992491 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:25:48.003003 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:25:48.003054 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:25:48.015233 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:25:48.015276 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:25:48.028555 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:25:48.028618 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:25:48.042685 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:25:48.289995 kernel: hv_netvsc 000d3af5-8f33-000d-3af5-8f33000d3af5 eth0: Data path switched from VF: enP7672s1 Dec 13 01:25:48.042735 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:25:48.056885 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:25:48.066333 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:25:48.078206 systemd-networkd[877]: eth0: DHCPv6 lease lost Dec 13 01:25:48.079750 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:25:48.086373 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:25:48.086595 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Dec 13 01:25:48.100684 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:25:48.100793 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:25:48.114696 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:25:48.114756 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:25:48.136492 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:25:48.145831 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:25:48.145898 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:25:48.157586 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:25:48.157646 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:25:48.168945 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:25:48.168997 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:25:48.183968 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:25:48.184019 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:25:48.195902 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:25:48.229256 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:25:48.229450 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:25:48.244355 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:25:48.244402 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:25:48.256645 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:25:48.256686 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Dec 13 01:25:48.276488 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:25:48.276543 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:25:48.290025 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:25:48.290076 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:25:48.301460 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:25:48.301510 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:25:48.341362 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:25:48.355284 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:25:48.355352 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:25:48.369149 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:25:48.369204 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:25:48.381299 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:25:48.381378 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:25:48.400846 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:25:48.400988 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:25:48.504898 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:25:48.505029 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:25:48.513477 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:25:48.525121 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:25:48.525186 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:25:48.563393 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Dec 13 01:25:48.641555 systemd[1]: Switching root. Dec 13 01:25:48.664355 systemd-journald[217]: Journal stopped Dec 13 01:25:52.162929 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Dec 13 01:25:52.162951 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:25:52.162962 kernel: SELinux: policy capability open_perms=1 Dec 13 01:25:52.162971 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:25:52.162979 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:25:52.162987 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:25:52.162996 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:25:52.163004 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:25:52.163012 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:25:52.163020 kernel: audit: type=1403 audit(1734053149.791:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:25:52.163030 systemd[1]: Successfully loaded SELinux policy in 118.500ms. Dec 13 01:25:52.163040 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.221ms. Dec 13 01:25:52.163050 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:25:52.163061 systemd[1]: Detected virtualization microsoft. Dec 13 01:25:52.163070 systemd[1]: Detected architecture arm64. Dec 13 01:25:52.163081 systemd[1]: Detected first boot. Dec 13 01:25:52.163090 systemd[1]: Hostname set to . Dec 13 01:25:52.163099 systemd[1]: Initializing machine ID from random generator. Dec 13 01:25:52.163109 zram_generator::config[1180]: No configuration found. Dec 13 01:25:52.163118 systemd[1]: Populated /etc with preset unit settings. 
Dec 13 01:25:52.163127 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:25:52.163138 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 13 01:25:52.163148 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:25:52.163157 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:25:52.163180 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:25:52.163190 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:25:52.163199 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:25:52.163209 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:25:52.163220 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:25:52.163229 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:25:52.163239 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:25:52.163248 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:25:52.163257 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:25:52.163267 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:25:52.163277 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:25:52.163287 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:25:52.163296 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 13 01:25:52.163307 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Dec 13 01:25:52.163316 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:25:52.163326 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:25:52.163337 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:25:52.163346 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:25:52.163356 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:25:52.163365 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:25:52.163376 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:25:52.163385 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:25:52.163395 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:25:52.163404 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:25:52.163414 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:25:52.163423 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:25:52.163432 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:25:52.163444 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:25:52.163453 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:25:52.163463 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:25:52.163472 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:25:52.163483 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:25:52.163493 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:25:52.163504 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Dec 13 01:25:52.163514 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:25:52.163524 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:25:52.163533 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:25:52.163543 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:25:52.163552 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:25:52.163562 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:25:52.163571 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:25:52.163581 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:25:52.163592 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:25:52.163602 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 01:25:52.163612 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Dec 13 01:25:52.163621 kernel: loop: module loaded Dec 13 01:25:52.163631 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:25:52.163640 kernel: ACPI: bus type drm_connector registered Dec 13 01:25:52.163649 kernel: fuse: init (API version 7.39) Dec 13 01:25:52.163658 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:25:52.163669 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:25:52.163692 systemd-journald[1299]: Collecting audit messages is disabled. 
Dec 13 01:25:52.163713 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:25:52.163723 systemd-journald[1299]: Journal started Dec 13 01:25:52.163745 systemd-journald[1299]: Runtime Journal (/run/log/journal/2ac3a970b5b34aef931c7fa0e94a8d53) is 8.0M, max 78.5M, 70.5M free. Dec 13 01:25:52.200019 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:25:52.216181 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:25:52.217436 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:25:52.225243 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:25:52.232183 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:25:52.238285 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:25:52.245326 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:25:52.251786 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:25:52.257946 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:25:52.265622 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:25:52.273559 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:25:52.273720 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:25:52.281496 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:25:52.281651 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:25:52.289057 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:25:52.289221 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:25:52.296178 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Dec 13 01:25:52.296324 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:25:52.304565 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:25:52.304712 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:25:52.311837 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:25:52.311994 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:25:52.319807 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:25:52.327992 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:25:52.336056 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:25:52.343908 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:25:52.359354 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:25:52.370287 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:25:52.379316 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:25:52.385682 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:25:52.390352 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:25:52.397610 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:25:52.403952 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:25:52.405136 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Dec 13 01:25:52.411331 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:25:52.412649 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:25:52.421354 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:25:52.437319 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:25:52.454227 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:25:52.461862 systemd-journald[1299]: Time spent on flushing to /var/log/journal/2ac3a970b5b34aef931c7fa0e94a8d53 is 41.404ms for 884 entries. Dec 13 01:25:52.461862 systemd-journald[1299]: System Journal (/var/log/journal/2ac3a970b5b34aef931c7fa0e94a8d53) is 11.8M, max 2.6G, 2.6G free. Dec 13 01:25:52.574473 systemd-journald[1299]: Received client request to flush runtime journal. Dec 13 01:25:52.574528 systemd-journald[1299]: /var/log/journal/2ac3a970b5b34aef931c7fa0e94a8d53/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Dec 13 01:25:52.574551 systemd-journald[1299]: Rotating system journal. Dec 13 01:25:52.461010 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:25:52.473731 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:25:52.485419 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:25:52.498321 udevadm[1340]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 01:25:52.565790 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:25:52.575700 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Dec 13 01:25:52.611982 systemd-tmpfiles[1338]: ACLs are not supported, ignoring. Dec 13 01:25:52.611996 systemd-tmpfiles[1338]: ACLs are not supported, ignoring. Dec 13 01:25:52.619505 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:25:52.632378 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:25:52.732032 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:25:52.747396 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:25:52.762750 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Dec 13 01:25:52.762770 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Dec 13 01:25:52.766863 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:25:53.375301 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:25:53.387342 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:25:53.408045 systemd-udevd[1367]: Using default interface naming scheme 'v255'. Dec 13 01:25:53.493410 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:25:53.510764 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:25:53.549236 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Dec 13 01:25:53.564291 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1372) Dec 13 01:25:53.573217 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1372) Dec 13 01:25:53.584855 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:25:53.634593 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Dec 13 01:25:53.664205 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:25:53.701249 kernel: hv_vmbus: registering driver hv_balloon Dec 13 01:25:53.702304 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 13 01:25:53.710768 kernel: hv_balloon: Memory hot add disabled on ARM64 Dec 13 01:25:53.728243 kernel: hv_vmbus: registering driver hyperv_fb Dec 13 01:25:53.740846 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 13 01:25:53.740929 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 13 01:25:53.747257 kernel: Console: switching to colour dummy device 80x25 Dec 13 01:25:53.747329 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1370) Dec 13 01:25:53.758172 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 01:25:53.798937 systemd-networkd[1382]: lo: Link UP Dec 13 01:25:53.798951 systemd-networkd[1382]: lo: Gained carrier Dec 13 01:25:53.800935 systemd-networkd[1382]: Enumeration completed Dec 13 01:25:53.801713 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:25:53.801770 systemd-networkd[1382]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:25:53.803717 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:25:53.834982 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 13 01:25:53.855826 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:25:53.857509 kernel: mlx5_core 1df8:00:02.0 enP7672s1: Link up Dec 13 01:25:53.864176 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:25:53.881900 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Dec 13 01:25:53.882152 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:25:53.889196 kernel: hv_netvsc 000d3af5-8f33-000d-3af5-8f33000d3af5 eth0: Data path switched to VF: enP7672s1 Dec 13 01:25:53.896278 systemd-networkd[1382]: enP7672s1: Link UP Dec 13 01:25:53.896926 systemd-networkd[1382]: eth0: Link UP Dec 13 01:25:53.896932 systemd-networkd[1382]: eth0: Gained carrier Dec 13 01:25:53.896949 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:25:53.902414 systemd-networkd[1382]: enP7672s1: Gained carrier Dec 13 01:25:53.903311 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:25:53.911213 systemd-networkd[1382]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 01:25:53.967062 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:25:53.979430 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:25:54.036196 lvm[1457]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:25:54.062095 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:25:54.070520 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:25:54.083295 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:25:54.093326 lvm[1460]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:25:54.116044 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:25:54.124307 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Dec 13 01:25:54.131721 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:25:54.131886 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:25:54.138310 systemd[1]: Reached target machines.target - Containers. Dec 13 01:25:54.145461 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:25:54.156331 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:25:54.164303 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:25:54.174081 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:25:54.175038 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:25:54.186060 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:25:54.203362 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:25:54.210768 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:25:54.221039 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:25:54.230286 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:25:54.245192 kernel: loop0: detected capacity change from 0 to 31320 Dec 13 01:25:54.281548 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:25:54.282283 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Dec 13 01:25:54.576266 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:25:54.633187 kernel: loop1: detected capacity change from 0 to 114328 Dec 13 01:25:54.903189 kernel: loop2: detected capacity change from 0 to 114432 Dec 13 01:25:55.212182 kernel: loop3: detected capacity change from 0 to 194512 Dec 13 01:25:55.249198 kernel: loop4: detected capacity change from 0 to 31320 Dec 13 01:25:55.257179 kernel: loop5: detected capacity change from 0 to 114328 Dec 13 01:25:55.264303 kernel: loop6: detected capacity change from 0 to 114432 Dec 13 01:25:55.272191 kernel: loop7: detected capacity change from 0 to 194512 Dec 13 01:25:55.275485 (sd-merge)[1485]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Dec 13 01:25:55.275898 (sd-merge)[1485]: Merged extensions into '/usr'. Dec 13 01:25:55.279766 systemd[1]: Reloading requested from client PID 1470 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:25:55.279784 systemd[1]: Reloading... Dec 13 01:25:55.340200 zram_generator::config[1510]: No configuration found. Dec 13 01:25:55.471018 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:25:55.541794 systemd[1]: Reloading finished in 261 ms. Dec 13 01:25:55.558840 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:25:55.581371 systemd[1]: Starting ensure-sysext.service... Dec 13 01:25:55.589358 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:25:55.593292 systemd-networkd[1382]: enP7672s1: Gained IPv6LL Dec 13 01:25:55.609020 systemd[1]: Reloading requested from client PID 1574 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:25:55.609036 systemd[1]: Reloading... 
Dec 13 01:25:55.626379 systemd-tmpfiles[1575]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:25:55.626961 systemd-tmpfiles[1575]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:25:55.628284 systemd-tmpfiles[1575]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:25:55.628510 systemd-tmpfiles[1575]: ACLs are not supported, ignoring. Dec 13 01:25:55.628557 systemd-tmpfiles[1575]: ACLs are not supported, ignoring. Dec 13 01:25:55.645176 systemd-tmpfiles[1575]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:25:55.645664 systemd-tmpfiles[1575]: Skipping /boot Dec 13 01:25:55.656619 systemd-tmpfiles[1575]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:25:55.656633 systemd-tmpfiles[1575]: Skipping /boot Dec 13 01:25:55.694202 zram_generator::config[1605]: No configuration found. Dec 13 01:25:55.785256 systemd-networkd[1382]: eth0: Gained IPv6LL Dec 13 01:25:55.803634 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:25:55.875727 systemd[1]: Reloading finished in 266 ms. Dec 13 01:25:55.892082 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:25:55.903680 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:25:55.919290 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:25:55.948318 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:25:55.956961 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Dec 13 01:25:55.968318 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:25:55.976007 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:25:55.993264 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:25:55.995254 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:25:56.022897 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:25:56.039145 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:25:56.049789 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:25:56.051145 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:25:56.060779 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:25:56.061057 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:25:56.069069 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:25:56.069385 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:25:56.076542 augenrules[1697]: No rules Dec 13 01:25:56.078098 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:25:56.086256 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:25:56.086584 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:25:56.095654 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:25:56.111478 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Dec 13 01:25:56.114205 systemd-resolved[1681]: Positive Trust Anchors: Dec 13 01:25:56.114219 systemd-resolved[1681]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:25:56.114252 systemd-resolved[1681]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:25:56.125461 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:25:56.133111 systemd-resolved[1681]: Using system hostname 'ci-4081.2.1-a-a2790899e3'. Dec 13 01:25:56.133466 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:25:56.142391 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:25:56.148202 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:25:56.148786 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:25:56.156278 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:25:56.156434 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:25:56.163819 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:25:56.163974 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:25:56.172462 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:25:56.172701 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Dec 13 01:25:56.184528 systemd[1]: Reached target network.target - Network. Dec 13 01:25:56.189972 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:25:56.196665 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:25:56.204603 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:25:56.210401 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:25:56.217913 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:25:56.225891 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:25:56.235417 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:25:56.241692 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:25:56.241864 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:25:56.254396 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:25:56.254575 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:25:56.263566 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:25:56.263732 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:25:56.270527 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:25:56.270683 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:25:56.278845 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:25:56.278997 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:25:56.291338 systemd[1]: Finished ensure-sysext.service. 
Dec 13 01:25:56.297952 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:25:56.298028 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:25:56.648874 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:25:56.656526 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:25:58.581860 ldconfig[1465]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:25:58.592249 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:25:58.602327 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:25:58.616581 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:25:58.623031 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:25:58.629361 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:25:58.636033 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:25:58.643064 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:25:58.649055 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:25:58.656021 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:25:58.662969 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Dec 13 01:25:58.663002 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:25:58.667914 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:25:58.686799 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:25:58.694799 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:25:58.700916 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:25:58.708288 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:25:58.714411 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:25:58.719463 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:25:58.724761 systemd[1]: System is tainted: cgroupsv1 Dec 13 01:25:58.724806 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:25:58.724825 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:25:58.727092 systemd[1]: Starting chronyd.service - NTP client/server... Dec 13 01:25:58.735300 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:25:58.755351 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 01:25:58.766341 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:25:58.774822 (chronyd)[1747]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Dec 13 01:25:58.775835 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:25:58.784285 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:25:58.792792 jq[1754]: false Dec 13 01:25:58.794938 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Dec 13 01:25:58.794984 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Dec 13 01:25:58.796560 chronyd[1758]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Dec 13 01:25:58.797396 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Dec 13 01:25:58.803629 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Dec 13 01:25:58.805833 KVP[1759]: KVP starting; pid is:1759 Dec 13 01:25:58.806361 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:25:58.822358 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:25:58.833334 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:25:58.831704 chronyd[1758]: Timezone right/UTC failed leap second check, ignoring Dec 13 01:25:58.831901 chronyd[1758]: Loaded seccomp filter (level 2) Dec 13 01:25:58.844290 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Dec 13 01:25:58.850298 extend-filesystems[1755]: Found loop4 Dec 13 01:25:58.850298 extend-filesystems[1755]: Found loop5 Dec 13 01:25:58.850298 extend-filesystems[1755]: Found loop6 Dec 13 01:25:58.850298 extend-filesystems[1755]: Found loop7 Dec 13 01:25:58.850298 extend-filesystems[1755]: Found sda Dec 13 01:25:58.850298 extend-filesystems[1755]: Found sda1 Dec 13 01:25:58.850298 extend-filesystems[1755]: Found sda2 Dec 13 01:25:58.850298 extend-filesystems[1755]: Found sda3 Dec 13 01:25:58.850298 extend-filesystems[1755]: Found usr Dec 13 01:25:58.850298 extend-filesystems[1755]: Found sda4 Dec 13 01:25:58.850298 extend-filesystems[1755]: Found sda6 Dec 13 01:25:58.850298 extend-filesystems[1755]: Found sda7 Dec 13 01:25:58.850298 extend-filesystems[1755]: Found sda9 Dec 13 01:25:58.850298 extend-filesystems[1755]: Checking size of /dev/sda9 Dec 13 01:25:59.061862 kernel: hv_utils: KVP IC version 4.0 Dec 13 01:25:59.061888 coreos-metadata[1749]: Dec 13 01:25:58.966 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 01:25:59.061888 coreos-metadata[1749]: Dec 13 01:25:58.968 INFO Fetch successful Dec 13 01:25:59.061888 coreos-metadata[1749]: Dec 13 01:25:58.968 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 13 01:25:59.061888 coreos-metadata[1749]: Dec 13 01:25:58.976 INFO Fetch successful Dec 13 01:25:59.061888 coreos-metadata[1749]: Dec 13 01:25:58.977 INFO Fetching http://168.63.129.16/machine/6e85050a-8f6a-48c8-b964-20f63fa1b2e6/8b53f638%2Daab5%2D46a9%2D82b1%2D698bec58c0eb.%5Fci%2D4081.2.1%2Da%2Da2790899e3?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 13 01:25:59.061888 coreos-metadata[1749]: Dec 13 01:25:58.979 INFO Fetch successful Dec 13 01:25:59.061888 coreos-metadata[1749]: Dec 13 01:25:58.980 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 13 01:25:59.061888 coreos-metadata[1749]: Dec 13 01:25:59.003 INFO Fetch successful 
Dec 13 01:25:58.854065 KVP[1759]: KVP LIC Version: 3.1 Dec 13 01:25:58.864554 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:25:59.062294 extend-filesystems[1755]: Old size kept for /dev/sda9 Dec 13 01:25:59.062294 extend-filesystems[1755]: Found sr0 Dec 13 01:25:58.862333 dbus-daemon[1752]: [system] SELinux support is enabled Dec 13 01:25:58.884153 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:25:58.906355 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:25:58.923051 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:25:58.929424 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:25:59.119913 update_engine[1790]: I20241213 01:25:58.995990 1790 main.cc:92] Flatcar Update Engine starting Dec 13 01:25:59.119913 update_engine[1790]: I20241213 01:25:59.019363 1790 update_check_scheduler.cc:74] Next update check in 5m49s Dec 13 01:25:58.957920 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:25:59.120188 jq[1795]: true Dec 13 01:25:58.970283 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:25:58.981688 systemd[1]: Started chronyd.service - NTP client/server. Dec 13 01:25:59.003585 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:25:59.003964 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:25:59.004253 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:25:59.120708 jq[1810]: true Dec 13 01:25:59.004451 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:25:59.029490 systemd[1]: motdgen.service: Deactivated successfully. 
Dec 13 01:25:59.029718 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:25:59.046955 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:25:59.061508 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:25:59.061733 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:25:59.097650 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:25:59.097681 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:25:59.099982 systemd-logind[1787]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Dec 13 01:25:59.101417 systemd-logind[1787]: New seat seat0. Dec 13 01:25:59.101750 (ntainerd)[1811]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:25:59.122808 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:25:59.122829 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:25:59.144926 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:25:59.180197 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1821) Dec 13 01:25:59.192296 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 01:25:59.212269 systemd[1]: Started update-engine.service - Update Engine. 
Dec 13 01:25:59.220769 tar[1806]: linux-arm64/helm Dec 13 01:25:59.223321 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:25:59.223942 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:25:59.231532 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:25:59.335956 bash[1882]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:25:59.341509 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:25:59.358233 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 01:25:59.499509 sshd_keygen[1793]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:25:59.517139 locksmithd[1867]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:25:59.552623 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:25:59.567404 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:25:59.584428 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Dec 13 01:25:59.593840 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:25:59.594051 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:25:59.614409 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:25:59.643305 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Dec 13 01:25:59.659020 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:25:59.671715 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:25:59.684511 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 13 01:25:59.696306 systemd[1]: Reached target getty.target - Login Prompts. 
Dec 13 01:25:59.790216 containerd[1811]: time="2024-12-13T01:25:59.788741880Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:25:59.833316 containerd[1811]: time="2024-12-13T01:25:59.833220320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:25:59.838462 containerd[1811]: time="2024-12-13T01:25:59.837297320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:25:59.838462 containerd[1811]: time="2024-12-13T01:25:59.837335320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:25:59.838462 containerd[1811]: time="2024-12-13T01:25:59.837352640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:25:59.838462 containerd[1811]: time="2024-12-13T01:25:59.837505000Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:25:59.838462 containerd[1811]: time="2024-12-13T01:25:59.837521200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:25:59.838462 containerd[1811]: time="2024-12-13T01:25:59.837578640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:25:59.838462 containerd[1811]: time="2024-12-13T01:25:59.837590240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 01:25:59.838462 containerd[1811]: time="2024-12-13T01:25:59.837781160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:25:59.838462 containerd[1811]: time="2024-12-13T01:25:59.837796360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:25:59.838462 containerd[1811]: time="2024-12-13T01:25:59.837808360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:25:59.838462 containerd[1811]: time="2024-12-13T01:25:59.837817560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:25:59.838742 containerd[1811]: time="2024-12-13T01:25:59.837882480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:25:59.838742 containerd[1811]: time="2024-12-13T01:25:59.838059600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:25:59.838742 containerd[1811]: time="2024-12-13T01:25:59.838198280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:25:59.838742 containerd[1811]: time="2024-12-13T01:25:59.838216240Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Dec 13 01:25:59.838742 containerd[1811]: time="2024-12-13T01:25:59.838295760Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:25:59.838742 containerd[1811]: time="2024-12-13T01:25:59.838335160Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:25:59.849579 tar[1806]: linux-arm64/LICENSE Dec 13 01:25:59.849751 tar[1806]: linux-arm64/README.md Dec 13 01:25:59.868961 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:25:59.876418 containerd[1811]: time="2024-12-13T01:25:59.875036080Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:25:59.876418 containerd[1811]: time="2024-12-13T01:25:59.875096960Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:25:59.876418 containerd[1811]: time="2024-12-13T01:25:59.875114320Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:25:59.876418 containerd[1811]: time="2024-12-13T01:25:59.875129880Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:25:59.876418 containerd[1811]: time="2024-12-13T01:25:59.875144520Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:25:59.876418 containerd[1811]: time="2024-12-13T01:25:59.875318920Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:25:59.876418 containerd[1811]: time="2024-12-13T01:25:59.875613040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:25:59.876418 containerd[1811]: time="2024-12-13T01:25:59.875702960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Dec 13 01:25:59.876418 containerd[1811]: time="2024-12-13T01:25:59.875718200Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:25:59.876418 containerd[1811]: time="2024-12-13T01:25:59.875732640Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:25:59.876418 containerd[1811]: time="2024-12-13T01:25:59.875746240Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:25:59.876418 containerd[1811]: time="2024-12-13T01:25:59.875759640Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:25:59.876418 containerd[1811]: time="2024-12-13T01:25:59.875772200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:25:59.876418 containerd[1811]: time="2024-12-13T01:25:59.875791040Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:25:59.876693 containerd[1811]: time="2024-12-13T01:25:59.875806240Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:25:59.876693 containerd[1811]: time="2024-12-13T01:25:59.875819160Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:25:59.876693 containerd[1811]: time="2024-12-13T01:25:59.875832600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:25:59.876693 containerd[1811]: time="2024-12-13T01:25:59.875845160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Dec 13 01:25:59.876693 containerd[1811]: time="2024-12-13T01:25:59.875864040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:25:59.876693 containerd[1811]: time="2024-12-13T01:25:59.875877360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:25:59.876693 containerd[1811]: time="2024-12-13T01:25:59.875890240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:25:59.876693 containerd[1811]: time="2024-12-13T01:25:59.875903160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:25:59.876693 containerd[1811]: time="2024-12-13T01:25:59.875914640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:25:59.876693 containerd[1811]: time="2024-12-13T01:25:59.875926720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:25:59.876693 containerd[1811]: time="2024-12-13T01:25:59.875938160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:25:59.876693 containerd[1811]: time="2024-12-13T01:25:59.875950560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:25:59.876693 containerd[1811]: time="2024-12-13T01:25:59.875962760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:25:59.876693 containerd[1811]: time="2024-12-13T01:25:59.875977400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:25:59.876927 containerd[1811]: time="2024-12-13T01:25:59.875989280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Dec 13 01:25:59.876927 containerd[1811]: time="2024-12-13T01:25:59.876001120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:25:59.876927 containerd[1811]: time="2024-12-13T01:25:59.876015440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:25:59.876927 containerd[1811]: time="2024-12-13T01:25:59.876031680Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:25:59.876927 containerd[1811]: time="2024-12-13T01:25:59.876053000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:25:59.876927 containerd[1811]: time="2024-12-13T01:25:59.876064840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:25:59.876927 containerd[1811]: time="2024-12-13T01:25:59.876077800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:25:59.876927 containerd[1811]: time="2024-12-13T01:25:59.876124600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:25:59.877212 containerd[1811]: time="2024-12-13T01:25:59.877188600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:25:59.877289 containerd[1811]: time="2024-12-13T01:25:59.877276640Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:25:59.877356 containerd[1811]: time="2024-12-13T01:25:59.877331640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:25:59.877416 containerd[1811]: time="2024-12-13T01:25:59.877402880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:25:59.877498 containerd[1811]: time="2024-12-13T01:25:59.877486080Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:25:59.877571 containerd[1811]: time="2024-12-13T01:25:59.877559880Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:25:59.877620 containerd[1811]: time="2024-12-13T01:25:59.877608720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:25:59.878038 containerd[1811]: time="2024-12-13T01:25:59.877980000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} 
CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:25:59.878258 containerd[1811]: time="2024-12-13T01:25:59.878078480Z" level=info msg="Connect containerd service" Dec 13 01:25:59.878258 containerd[1811]: time="2024-12-13T01:25:59.878110960Z" level=info msg="using legacy CRI server" Dec 13 01:25:59.878258 containerd[1811]: time="2024-12-13T01:25:59.878117960Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:25:59.878609 containerd[1811]: time="2024-12-13T01:25:59.878467280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:25:59.881381 containerd[1811]: 
time="2024-12-13T01:25:59.881345520Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:25:59.882201 containerd[1811]: time="2024-12-13T01:25:59.881627320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:25:59.882201 containerd[1811]: time="2024-12-13T01:25:59.881671440Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:25:59.882201 containerd[1811]: time="2024-12-13T01:25:59.881706080Z" level=info msg="Start subscribing containerd event" Dec 13 01:25:59.882201 containerd[1811]: time="2024-12-13T01:25:59.881740840Z" level=info msg="Start recovering state" Dec 13 01:25:59.882201 containerd[1811]: time="2024-12-13T01:25:59.881803520Z" level=info msg="Start event monitor" Dec 13 01:25:59.882201 containerd[1811]: time="2024-12-13T01:25:59.881814680Z" level=info msg="Start snapshots syncer" Dec 13 01:25:59.882201 containerd[1811]: time="2024-12-13T01:25:59.881825920Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:25:59.882201 containerd[1811]: time="2024-12-13T01:25:59.881833880Z" level=info msg="Start streaming server" Dec 13 01:25:59.881976 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:25:59.883285 containerd[1811]: time="2024-12-13T01:25:59.883264960Z" level=info msg="containerd successfully booted in 0.095283s" Dec 13 01:25:59.978385 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:25:59.986154 (kubelet)[1942]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:25:59.986961 systemd[1]: Reached target multi-user.target - Multi-User System. 
Dec 13 01:25:59.994328 systemd[1]: Startup finished in 12.672s (kernel) + 10.319s (userspace) = 22.992s. Dec 13 01:26:00.307549 login[1923]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:00.309760 login[1924]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:00.322053 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:26:00.326417 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:26:00.332903 systemd-logind[1787]: New session 2 of user core. Dec 13 01:26:00.339247 systemd-logind[1787]: New session 1 of user core. Dec 13 01:26:00.345923 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:26:00.352899 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:26:00.367188 (systemd)[1956]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:26:00.501181 kubelet[1942]: E1213 01:26:00.498978 1942 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:00.505603 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:00.505801 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:26:00.516191 systemd[1956]: Queued start job for default target default.target. Dec 13 01:26:00.517453 systemd[1956]: Created slice app.slice - User Application Slice. Dec 13 01:26:00.517480 systemd[1956]: Reached target paths.target - Paths. Dec 13 01:26:00.517491 systemd[1956]: Reached target timers.target - Timers. 
Dec 13 01:26:00.521393 systemd[1956]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:26:00.529673 systemd[1956]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:26:00.529735 systemd[1956]: Reached target sockets.target - Sockets. Dec 13 01:26:00.529748 systemd[1956]: Reached target basic.target - Basic System. Dec 13 01:26:00.529788 systemd[1956]: Reached target default.target - Main User Target. Dec 13 01:26:00.529812 systemd[1956]: Startup finished in 155ms. Dec 13 01:26:00.529901 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:26:00.531045 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:26:00.531656 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:26:01.275473 waagent[1920]: 2024-12-13T01:26:01.275381Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Dec 13 01:26:01.281208 waagent[1920]: 2024-12-13T01:26:01.281132Z INFO Daemon Daemon OS: flatcar 4081.2.1 Dec 13 01:26:01.285718 waagent[1920]: 2024-12-13T01:26:01.285666Z INFO Daemon Daemon Python: 3.11.9 Dec 13 01:26:01.291289 waagent[1920]: 2024-12-13T01:26:01.291231Z INFO Daemon Daemon Run daemon Dec 13 01:26:01.297165 waagent[1920]: 2024-12-13T01:26:01.295531Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.2.1' Dec 13 01:26:01.305078 waagent[1920]: 2024-12-13T01:26:01.305019Z INFO Daemon Daemon Using waagent for provisioning Dec 13 01:26:01.310612 waagent[1920]: 2024-12-13T01:26:01.310568Z INFO Daemon Daemon Activate resource disk Dec 13 01:26:01.315226 waagent[1920]: 2024-12-13T01:26:01.315182Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 13 01:26:01.326346 waagent[1920]: 2024-12-13T01:26:01.326296Z INFO Daemon Daemon Found device: None Dec 13 01:26:01.331142 waagent[1920]: 2024-12-13T01:26:01.331097Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to 
detect disk topology Dec 13 01:26:01.339702 waagent[1920]: 2024-12-13T01:26:01.339656Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 13 01:26:01.352446 waagent[1920]: 2024-12-13T01:26:01.352388Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 01:26:01.358130 waagent[1920]: 2024-12-13T01:26:01.358080Z INFO Daemon Daemon Running default provisioning handler Dec 13 01:26:01.369609 waagent[1920]: 2024-12-13T01:26:01.369526Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Dec 13 01:26:01.382919 waagent[1920]: 2024-12-13T01:26:01.382853Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 01:26:01.392386 waagent[1920]: 2024-12-13T01:26:01.392331Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 01:26:01.397289 waagent[1920]: 2024-12-13T01:26:01.397244Z INFO Daemon Daemon Copying ovf-env.xml Dec 13 01:26:01.500952 waagent[1920]: 2024-12-13T01:26:01.500046Z INFO Daemon Daemon Successfully mounted dvd Dec 13 01:26:01.514991 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 13 01:26:01.516610 waagent[1920]: 2024-12-13T01:26:01.516110Z INFO Daemon Daemon Detect protocol endpoint Dec 13 01:26:01.521666 waagent[1920]: 2024-12-13T01:26:01.521606Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 01:26:01.527766 waagent[1920]: 2024-12-13T01:26:01.527678Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Dec 13 01:26:01.534957 waagent[1920]: 2024-12-13T01:26:01.534903Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 13 01:26:01.540766 waagent[1920]: 2024-12-13T01:26:01.540714Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 13 01:26:01.546402 waagent[1920]: 2024-12-13T01:26:01.546353Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 13 01:26:01.591478 waagent[1920]: 2024-12-13T01:26:01.591429Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 13 01:26:01.599051 waagent[1920]: 2024-12-13T01:26:01.599022Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 13 01:26:01.604964 waagent[1920]: 2024-12-13T01:26:01.604916Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 13 01:26:01.721650 waagent[1920]: 2024-12-13T01:26:01.721546Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 13 01:26:01.728761 waagent[1920]: 2024-12-13T01:26:01.728695Z INFO Daemon Daemon Forcing an update of the goal state. Dec 13 01:26:01.738569 waagent[1920]: 2024-12-13T01:26:01.738515Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 13 01:26:01.758591 waagent[1920]: 2024-12-13T01:26:01.758543Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Dec 13 01:26:01.764832 waagent[1920]: 2024-12-13T01:26:01.764785Z INFO Daemon Dec 13 01:26:01.768121 waagent[1920]: 2024-12-13T01:26:01.768076Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: d91988f3-679a-41fe-bf27-d9963e6a44ce eTag: 770276530206113062 source: Fabric] Dec 13 01:26:01.780489 waagent[1920]: 2024-12-13T01:26:01.780414Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Dec 13 01:26:01.787860 waagent[1920]: 2024-12-13T01:26:01.787814Z INFO Daemon Dec 13 01:26:01.790879 waagent[1920]: 2024-12-13T01:26:01.790837Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 13 01:26:01.803461 waagent[1920]: 2024-12-13T01:26:01.803426Z INFO Daemon Daemon Downloading artifacts profile blob Dec 13 01:26:01.897239 waagent[1920]: 2024-12-13T01:26:01.897124Z INFO Daemon Downloaded certificate {'thumbprint': '9A04713617EF7F6A8BE4A74948482535CBE9B7B2', 'hasPrivateKey': False} Dec 13 01:26:01.906977 waagent[1920]: 2024-12-13T01:26:01.906929Z INFO Daemon Downloaded certificate {'thumbprint': '2F20D5D778771011803C8D5F4FF722DC834FD8F3', 'hasPrivateKey': True} Dec 13 01:26:01.917633 waagent[1920]: 2024-12-13T01:26:01.917583Z INFO Daemon Fetch goal state completed Dec 13 01:26:01.928629 waagent[1920]: 2024-12-13T01:26:01.928581Z INFO Daemon Daemon Starting provisioning Dec 13 01:26:01.933779 waagent[1920]: 2024-12-13T01:26:01.933718Z INFO Daemon Daemon Handle ovf-env.xml. Dec 13 01:26:01.938834 waagent[1920]: 2024-12-13T01:26:01.938783Z INFO Daemon Daemon Set hostname [ci-4081.2.1-a-a2790899e3] Dec 13 01:26:01.959321 waagent[1920]: 2024-12-13T01:26:01.959248Z INFO Daemon Daemon Publish hostname [ci-4081.2.1-a-a2790899e3] Dec 13 01:26:01.965714 waagent[1920]: 2024-12-13T01:26:01.965650Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 13 01:26:01.972014 waagent[1920]: 2024-12-13T01:26:01.971955Z INFO Daemon Daemon Primary interface is [eth0] Dec 13 01:26:02.000244 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:26:02.000252 systemd-networkd[1382]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 01:26:02.000279 systemd-networkd[1382]: eth0: DHCP lease lost Dec 13 01:26:02.001438 waagent[1920]: 2024-12-13T01:26:02.001357Z INFO Daemon Daemon Create user account if not exists Dec 13 01:26:02.007382 waagent[1920]: 2024-12-13T01:26:02.007321Z INFO Daemon Daemon User core already exists, skip useradd Dec 13 01:26:02.008244 systemd-networkd[1382]: eth0: DHCPv6 lease lost Dec 13 01:26:02.013059 waagent[1920]: 2024-12-13T01:26:02.012995Z INFO Daemon Daemon Configure sudoer Dec 13 01:26:02.017591 waagent[1920]: 2024-12-13T01:26:02.017536Z INFO Daemon Daemon Configure sshd Dec 13 01:26:02.022449 waagent[1920]: 2024-12-13T01:26:02.022394Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 13 01:26:02.034862 waagent[1920]: 2024-12-13T01:26:02.034752Z INFO Daemon Daemon Deploy ssh public key. Dec 13 01:26:02.053205 systemd-networkd[1382]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 01:26:03.152029 waagent[1920]: 2024-12-13T01:26:03.151972Z INFO Daemon Daemon Provisioning complete Dec 13 01:26:03.170841 waagent[1920]: 2024-12-13T01:26:03.170790Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 13 01:26:03.177083 waagent[1920]: 2024-12-13T01:26:03.177032Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Dec 13 01:26:03.186600 waagent[1920]: 2024-12-13T01:26:03.186546Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Dec 13 01:26:03.316797 waagent[2019]: 2024-12-13T01:26:03.316149Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Dec 13 01:26:03.316797 waagent[2019]: 2024-12-13T01:26:03.316317Z INFO ExtHandler ExtHandler OS: flatcar 4081.2.1 Dec 13 01:26:03.316797 waagent[2019]: 2024-12-13T01:26:03.316382Z INFO ExtHandler ExtHandler Python: 3.11.9 Dec 13 01:26:03.349440 waagent[2019]: 2024-12-13T01:26:03.349361Z INFO ExtHandler ExtHandler Distro: flatcar-4081.2.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 01:26:03.349748 waagent[2019]: 2024-12-13T01:26:03.349711Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:26:03.349879 waagent[2019]: 2024-12-13T01:26:03.349847Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:26:03.358138 waagent[2019]: 2024-12-13T01:26:03.358084Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 13 01:26:03.366762 waagent[2019]: 2024-12-13T01:26:03.366719Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Dec 13 01:26:03.367409 waagent[2019]: 2024-12-13T01:26:03.367370Z INFO ExtHandler Dec 13 01:26:03.367570 waagent[2019]: 2024-12-13T01:26:03.367538Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 2ce83409-6150-4497-a71c-78513968b143 eTag: 770276530206113062 source: Fabric] Dec 13 01:26:03.367928 waagent[2019]: 2024-12-13T01:26:03.367892Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 13 01:26:03.368593 waagent[2019]: 2024-12-13T01:26:03.368552Z INFO ExtHandler Dec 13 01:26:03.369191 waagent[2019]: 2024-12-13T01:26:03.368708Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 13 01:26:03.372515 waagent[2019]: 2024-12-13T01:26:03.372479Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 13 01:26:03.453047 waagent[2019]: 2024-12-13T01:26:03.452899Z INFO ExtHandler Downloaded certificate {'thumbprint': '9A04713617EF7F6A8BE4A74948482535CBE9B7B2', 'hasPrivateKey': False} Dec 13 01:26:03.453441 waagent[2019]: 2024-12-13T01:26:03.453396Z INFO ExtHandler Downloaded certificate {'thumbprint': '2F20D5D778771011803C8D5F4FF722DC834FD8F3', 'hasPrivateKey': True} Dec 13 01:26:03.453890 waagent[2019]: 2024-12-13T01:26:03.453845Z INFO ExtHandler Fetch goal state completed Dec 13 01:26:03.470343 waagent[2019]: 2024-12-13T01:26:03.470285Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2019 Dec 13 01:26:03.470498 waagent[2019]: 2024-12-13T01:26:03.470463Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 13 01:26:03.472171 waagent[2019]: 2024-12-13T01:26:03.472112Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.2.1', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 01:26:03.472568 waagent[2019]: 2024-12-13T01:26:03.472525Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 01:26:03.505948 waagent[2019]: 2024-12-13T01:26:03.505902Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 01:26:03.506176 waagent[2019]: 2024-12-13T01:26:03.506123Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 01:26:03.512502 waagent[2019]: 2024-12-13T01:26:03.511991Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Dec 13 01:26:03.518592 systemd[1]: Reloading requested from client PID 2034 ('systemctl') (unit waagent.service)... Dec 13 01:26:03.518605 systemd[1]: Reloading... Dec 13 01:26:03.591310 zram_generator::config[2068]: No configuration found. Dec 13 01:26:03.700487 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:26:03.774196 systemd[1]: Reloading finished in 255 ms. Dec 13 01:26:03.794488 waagent[2019]: 2024-12-13T01:26:03.794345Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Dec 13 01:26:03.799997 systemd[1]: Reloading requested from client PID 2127 ('systemctl') (unit waagent.service)... Dec 13 01:26:03.800108 systemd[1]: Reloading... Dec 13 01:26:03.884203 zram_generator::config[2167]: No configuration found. Dec 13 01:26:03.986716 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:26:04.060874 systemd[1]: Reloading finished in 260 ms. Dec 13 01:26:04.084414 waagent[2019]: 2024-12-13T01:26:04.083636Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 13 01:26:04.084414 waagent[2019]: 2024-12-13T01:26:04.083809Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 13 01:26:04.388823 waagent[2019]: 2024-12-13T01:26:04.388695Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 13 01:26:04.389389 waagent[2019]: 2024-12-13T01:26:04.389336Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Dec 13 01:26:04.390225 waagent[2019]: 2024-12-13T01:26:04.390125Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 13 01:26:04.390750 waagent[2019]: 2024-12-13T01:26:04.390580Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 13 01:26:04.390993 waagent[2019]: 2024-12-13T01:26:04.390948Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:26:04.391889 waagent[2019]: 2024-12-13T01:26:04.391058Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:26:04.391889 waagent[2019]: 2024-12-13T01:26:04.391140Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:26:04.391889 waagent[2019]: 2024-12-13T01:26:04.391307Z INFO EnvHandler ExtHandler Configure routes Dec 13 01:26:04.391889 waagent[2019]: 2024-12-13T01:26:04.391372Z INFO EnvHandler ExtHandler Gateway:None Dec 13 01:26:04.391889 waagent[2019]: 2024-12-13T01:26:04.391415Z INFO EnvHandler ExtHandler Routes:None Dec 13 01:26:04.392150 waagent[2019]: 2024-12-13T01:26:04.392112Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:26:04.392445 waagent[2019]: 2024-12-13T01:26:04.392406Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Dec 13 01:26:04.392682 waagent[2019]: 2024-12-13T01:26:04.392645Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 01:26:04.392682 waagent[2019]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 01:26:04.392682 waagent[2019]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 01:26:04.392682 waagent[2019]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 01:26:04.392682 waagent[2019]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:26:04.392682 waagent[2019]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:26:04.392682 waagent[2019]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:26:04.392949 waagent[2019]: 2024-12-13T01:26:04.392907Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 01:26:04.393413 waagent[2019]: 2024-12-13T01:26:04.393371Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 01:26:04.394003 waagent[2019]: 2024-12-13T01:26:04.393940Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 01:26:04.394140 waagent[2019]: 2024-12-13T01:26:04.394089Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Dec 13 01:26:04.394705 waagent[2019]: 2024-12-13T01:26:04.394646Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 01:26:04.403582 waagent[2019]: 2024-12-13T01:26:04.403530Z INFO ExtHandler ExtHandler Dec 13 01:26:04.403964 waagent[2019]: 2024-12-13T01:26:04.403916Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: dbdcdbc5-529d-4001-b6c4-9d44fed2615a correlation 94205045-38d8-4b23-8198-02f842fd56a6 created: 2024-12-13T01:24:56.748161Z] Dec 13 01:26:04.405219 waagent[2019]: 2024-12-13T01:26:04.405153Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 13 01:26:04.406237 waagent[2019]: 2024-12-13T01:26:04.406194Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Dec 13 01:26:04.438858 waagent[2019]: 2024-12-13T01:26:04.438805Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 31E69A7C-4E8D-4B95-983C-482F3821B8B8;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Dec 13 01:26:04.444190 waagent[2019]: 2024-12-13T01:26:04.443786Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 01:26:04.444190 waagent[2019]: Executing ['ip', '-a', '-o', 'link']: Dec 13 01:26:04.444190 waagent[2019]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 01:26:04.444190 waagent[2019]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f5:8f:33 brd ff:ff:ff:ff:ff:ff Dec 13 01:26:04.444190 waagent[2019]: 3: enP7672s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f5:8f:33 brd ff:ff:ff:ff:ff:ff\ altname enP7672p0s2 Dec 13 01:26:04.444190 waagent[2019]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 
01:26:04.444190 waagent[2019]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 01:26:04.444190 waagent[2019]: 2: eth0 inet 10.200.20.34/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 01:26:04.444190 waagent[2019]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 01:26:04.444190 waagent[2019]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 13 01:26:04.444190 waagent[2019]: 2: eth0 inet6 fe80::20d:3aff:fef5:8f33/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 13 01:26:04.444190 waagent[2019]: 3: enP7672s1 inet6 fe80::20d:3aff:fef5:8f33/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 13 01:26:04.485112 waagent[2019]: 2024-12-13T01:26:04.484320Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Dec 13 01:26:04.485112 waagent[2019]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:26:04.485112 waagent[2019]: pkts bytes target prot opt in out source destination Dec 13 01:26:04.485112 waagent[2019]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:26:04.485112 waagent[2019]: pkts bytes target prot opt in out source destination Dec 13 01:26:04.485112 waagent[2019]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:26:04.485112 waagent[2019]: pkts bytes target prot opt in out source destination Dec 13 01:26:04.485112 waagent[2019]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 01:26:04.485112 waagent[2019]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 01:26:04.485112 waagent[2019]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 01:26:04.487100 waagent[2019]: 2024-12-13T01:26:04.487058Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 13 01:26:04.487100 waagent[2019]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:26:04.487100 
waagent[2019]: pkts bytes target prot opt in out source destination Dec 13 01:26:04.487100 waagent[2019]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:26:04.487100 waagent[2019]: pkts bytes target prot opt in out source destination Dec 13 01:26:04.487100 waagent[2019]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:26:04.487100 waagent[2019]: pkts bytes target prot opt in out source destination Dec 13 01:26:04.487100 waagent[2019]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 01:26:04.487100 waagent[2019]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 01:26:04.487100 waagent[2019]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 01:26:04.487594 waagent[2019]: 2024-12-13T01:26:04.487564Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 13 01:26:10.756512 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:26:10.767338 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:10.860345 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:10.874546 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:26:10.945483 kubelet[2263]: E1213 01:26:10.945406 2263 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:10.948397 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:10.948547 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:26:21.018497 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Dec 13 01:26:21.027349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:21.122922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:21.125937 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:26:21.171578 kubelet[2285]: E1213 01:26:21.171514 2285 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:21.174073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:21.174232 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:26:22.631973 chronyd[1758]: Selected source PHC0 Dec 13 01:26:31.268552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 01:26:31.278352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:31.407833 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:26:31.411579 (kubelet)[2306]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:26:31.455994 kubelet[2306]: E1213 01:26:31.455913 2306 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:31.459329 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:31.459499 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:26:33.532903 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:26:33.539400 systemd[1]: Started sshd@0-10.200.20.34:22-10.200.16.10:53044.service - OpenSSH per-connection server daemon (10.200.16.10:53044). Dec 13 01:26:34.020018 sshd[2315]: Accepted publickey for core from 10.200.16.10 port 53044 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:26:34.021325 sshd[2315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:34.025517 systemd-logind[1787]: New session 3 of user core. Dec 13 01:26:34.031544 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:26:34.420620 systemd[1]: Started sshd@1-10.200.20.34:22-10.200.16.10:53056.service - OpenSSH per-connection server daemon (10.200.16.10:53056). Dec 13 01:26:34.850824 sshd[2320]: Accepted publickey for core from 10.200.16.10 port 53056 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:26:34.852192 sshd[2320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:34.855759 systemd-logind[1787]: New session 4 of user core. Dec 13 01:26:34.862503 systemd[1]: Started session-4.scope - Session 4 of User core. 
Dec 13 01:26:35.183374 sshd[2320]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:35.186266 systemd[1]: sshd@1-10.200.20.34:22-10.200.16.10:53056.service: Deactivated successfully. Dec 13 01:26:35.189541 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:26:35.190352 systemd-logind[1787]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:26:35.191764 systemd-logind[1787]: Removed session 4. Dec 13 01:26:35.258385 systemd[1]: Started sshd@2-10.200.20.34:22-10.200.16.10:53068.service - OpenSSH per-connection server daemon (10.200.16.10:53068). Dec 13 01:26:35.684730 sshd[2328]: Accepted publickey for core from 10.200.16.10 port 53068 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:26:35.685997 sshd[2328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:35.690956 systemd-logind[1787]: New session 5 of user core. Dec 13 01:26:35.697422 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:26:35.996363 sshd[2328]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:35.999190 systemd[1]: sshd@2-10.200.20.34:22-10.200.16.10:53068.service: Deactivated successfully. Dec 13 01:26:36.002851 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:26:36.003828 systemd-logind[1787]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:26:36.005059 systemd-logind[1787]: Removed session 5. Dec 13 01:26:36.078812 systemd[1]: Started sshd@3-10.200.20.34:22-10.200.16.10:53084.service - OpenSSH per-connection server daemon (10.200.16.10:53084). Dec 13 01:26:36.520151 sshd[2336]: Accepted publickey for core from 10.200.16.10 port 53084 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:26:36.521445 sshd[2336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:36.525147 systemd-logind[1787]: New session 6 of user core. 
Dec 13 01:26:36.533395 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:26:36.860356 sshd[2336]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:36.862791 systemd[1]: sshd@3-10.200.20.34:22-10.200.16.10:53084.service: Deactivated successfully. Dec 13 01:26:36.866687 systemd-logind[1787]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:26:36.867415 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:26:36.868700 systemd-logind[1787]: Removed session 6. Dec 13 01:26:36.937393 systemd[1]: Started sshd@4-10.200.20.34:22-10.200.16.10:53086.service - OpenSSH per-connection server daemon (10.200.16.10:53086). Dec 13 01:26:37.379411 sshd[2344]: Accepted publickey for core from 10.200.16.10 port 53086 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:26:37.380684 sshd[2344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:37.384605 systemd-logind[1787]: New session 7 of user core. Dec 13 01:26:37.395478 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:26:37.742831 sudo[2348]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:26:37.743091 sudo[2348]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:26:37.768926 sudo[2348]: pam_unix(sudo:session): session closed for user root Dec 13 01:26:37.853143 sshd[2344]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:37.856374 systemd[1]: sshd@4-10.200.20.34:22-10.200.16.10:53086.service: Deactivated successfully. Dec 13 01:26:37.860215 systemd-logind[1787]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:26:37.860461 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:26:37.861613 systemd-logind[1787]: Removed session 7. Dec 13 01:26:37.929388 systemd[1]: Started sshd@5-10.200.20.34:22-10.200.16.10:53100.service - OpenSSH per-connection server daemon (10.200.16.10:53100). 
Dec 13 01:26:38.351438 sshd[2353]: Accepted publickey for core from 10.200.16.10 port 53100 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:26:38.352736 sshd[2353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:38.357394 systemd-logind[1787]: New session 8 of user core. Dec 13 01:26:38.364463 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:26:38.596188 sudo[2358]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:26:38.596454 sudo[2358]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:26:38.599490 sudo[2358]: pam_unix(sudo:session): session closed for user root Dec 13 01:26:38.603961 sudo[2357]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:26:38.604482 sudo[2357]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:26:38.625503 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:26:38.626900 auditctl[2361]: No rules Dec 13 01:26:38.629415 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:26:38.629668 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:26:38.631604 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:26:38.655113 augenrules[2380]: No rules Dec 13 01:26:38.656795 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:26:38.658299 sudo[2357]: pam_unix(sudo:session): session closed for user root Dec 13 01:26:38.737700 sshd[2353]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:38.742197 systemd-logind[1787]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:26:38.742580 systemd[1]: sshd@5-10.200.20.34:22-10.200.16.10:53100.service: Deactivated successfully. 
Dec 13 01:26:38.745075 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:26:38.745947 systemd-logind[1787]: Removed session 8. Dec 13 01:26:38.816390 systemd[1]: Started sshd@6-10.200.20.34:22-10.200.16.10:52620.service - OpenSSH per-connection server daemon (10.200.16.10:52620). Dec 13 01:26:39.242044 sshd[2389]: Accepted publickey for core from 10.200.16.10 port 52620 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:26:39.243448 sshd[2389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:39.248438 systemd-logind[1787]: New session 9 of user core. Dec 13 01:26:39.259488 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:26:39.488709 sudo[2393]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:26:39.488982 sudo[2393]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:26:40.503435 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:26:40.503579 (dockerd)[2408]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:26:41.106976 dockerd[2408]: time="2024-12-13T01:26:41.106914782Z" level=info msg="Starting up" Dec 13 01:26:41.518266 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 01:26:41.523396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:41.559828 dockerd[2408]: time="2024-12-13T01:26:41.559769111Z" level=info msg="Loading containers: start." Dec 13 01:26:41.775758 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:26:41.779305 (kubelet)[2443]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:26:41.827401 kubelet[2443]: E1213 01:26:41.827329 2443 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:41.903125 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Dec 13 01:26:41.829928 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:41.830103 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:26:41.955350 kernel: Initializing XFRM netlink socket Dec 13 01:26:42.086514 systemd-networkd[1382]: docker0: Link UP Dec 13 01:26:42.107652 dockerd[2408]: time="2024-12-13T01:26:42.107317225Z" level=info msg="Loading containers: done." Dec 13 01:26:42.126746 dockerd[2408]: time="2024-12-13T01:26:42.126323342Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:26:42.126746 dockerd[2408]: time="2024-12-13T01:26:42.126445902Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:26:42.126746 dockerd[2408]: time="2024-12-13T01:26:42.126577622Z" level=info msg="Daemon has completed initialization" Dec 13 01:26:42.174292 dockerd[2408]: time="2024-12-13T01:26:42.174234996Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:26:42.174696 systemd[1]: Started docker.service - Docker Application Container Engine. 
Dec 13 01:26:43.859903 containerd[1811]: time="2024-12-13T01:26:43.859827023Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:26:44.004249 update_engine[1790]: I20241213 01:26:44.004189 1790 update_attempter.cc:509] Updating boot flags... Dec 13 01:26:44.074195 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (2582) Dec 13 01:26:44.810239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2205655615.mount: Deactivated successfully. Dec 13 01:26:46.925543 containerd[1811]: time="2024-12-13T01:26:46.925489480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:46.927386 containerd[1811]: time="2024-12-13T01:26:46.927186883Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201250" Dec 13 01:26:46.929344 containerd[1811]: time="2024-12-13T01:26:46.929297606Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:46.933269 containerd[1811]: time="2024-12-13T01:26:46.933221172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:46.934689 containerd[1811]: time="2024-12-13T01:26:46.934486614Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 3.074614351s" Dec 13 01:26:46.934689 containerd[1811]: 
time="2024-12-13T01:26:46.934525134Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Dec 13 01:26:46.953575 containerd[1811]: time="2024-12-13T01:26:46.953188283Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 01:26:49.426091 containerd[1811]: time="2024-12-13T01:26:49.426035443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:49.428802 containerd[1811]: time="2024-12-13T01:26:49.428589447Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381297" Dec 13 01:26:49.431608 containerd[1811]: time="2024-12-13T01:26:49.431575172Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:49.437155 containerd[1811]: time="2024-12-13T01:26:49.437120300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:49.438281 containerd[1811]: time="2024-12-13T01:26:49.438230862Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 2.485010339s" Dec 13 01:26:49.438281 containerd[1811]: time="2024-12-13T01:26:49.438279742Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns 
image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Dec 13 01:26:49.457239 containerd[1811]: time="2024-12-13T01:26:49.457201651Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 01:26:51.035203 containerd[1811]: time="2024-12-13T01:26:51.034542261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:51.037652 containerd[1811]: time="2024-12-13T01:26:51.037486905Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765640" Dec 13 01:26:51.041967 containerd[1811]: time="2024-12-13T01:26:51.041923032Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:51.048109 containerd[1811]: time="2024-12-13T01:26:51.048062122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:51.049088 containerd[1811]: time="2024-12-13T01:26:51.049049443Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.591806752s" Dec 13 01:26:51.049088 containerd[1811]: time="2024-12-13T01:26:51.049086403Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Dec 13 01:26:51.070193 containerd[1811]: time="2024-12-13T01:26:51.070013956Z" level=info 
msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:26:52.018291 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 01:26:52.026333 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:52.174930 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:52.177851 (kubelet)[2696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:26:52.216473 kubelet[2696]: E1213 01:26:52.216393 2696 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:52.218553 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:52.218693 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:26:52.888451 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2183013448.mount: Deactivated successfully. 
Dec 13 01:26:53.478209 containerd[1811]: time="2024-12-13T01:26:53.477884774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:53.480481 containerd[1811]: time="2024-12-13T01:26:53.480335058Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273977" Dec 13 01:26:53.482595 containerd[1811]: time="2024-12-13T01:26:53.482549262Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:53.486539 containerd[1811]: time="2024-12-13T01:26:53.486487108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:53.487509 containerd[1811]: time="2024-12-13T01:26:53.487023709Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 2.416975593s" Dec 13 01:26:53.487509 containerd[1811]: time="2024-12-13T01:26:53.487061709Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 01:26:53.506473 containerd[1811]: time="2024-12-13T01:26:53.506430139Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:26:54.196502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1094744369.mount: Deactivated successfully. 
Dec 13 01:26:55.434093 containerd[1811]: time="2024-12-13T01:26:55.434031885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:55.435990 containerd[1811]: time="2024-12-13T01:26:55.435723887Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Dec 13 01:26:55.438009 containerd[1811]: time="2024-12-13T01:26:55.437959050Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:55.441829 containerd[1811]: time="2024-12-13T01:26:55.441778816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:55.443086 containerd[1811]: time="2024-12-13T01:26:55.442949098Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.936476439s" Dec 13 01:26:55.443086 containerd[1811]: time="2024-12-13T01:26:55.442983818Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 01:26:55.463433 containerd[1811]: time="2024-12-13T01:26:55.463387007Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:26:56.106881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount173063635.mount: Deactivated successfully. 
Dec 13 01:26:56.125047 containerd[1811]: time="2024-12-13T01:26:56.124276317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:56.125988 containerd[1811]: time="2024-12-13T01:26:56.125933960Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Dec 13 01:26:56.130334 containerd[1811]: time="2024-12-13T01:26:56.130275686Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:56.135060 containerd[1811]: time="2024-12-13T01:26:56.135028813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:56.135910 containerd[1811]: time="2024-12-13T01:26:56.135781014Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 672.352767ms" Dec 13 01:26:56.135910 containerd[1811]: time="2024-12-13T01:26:56.135816974Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 01:26:56.155388 containerd[1811]: time="2024-12-13T01:26:56.155334042Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 01:26:56.803473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2361258717.mount: Deactivated successfully. 
Dec 13 01:27:01.285209 containerd[1811]: time="2024-12-13T01:27:01.284742616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:01.287546 containerd[1811]: time="2024-12-13T01:27:01.287502020Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Dec 13 01:27:01.291031 containerd[1811]: time="2024-12-13T01:27:01.290971545Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:01.295136 containerd[1811]: time="2024-12-13T01:27:01.295078231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:01.296400 containerd[1811]: time="2024-12-13T01:27:01.296238873Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 5.140865111s" Dec 13 01:27:01.296400 containerd[1811]: time="2024-12-13T01:27:01.296277353Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Dec 13 01:27:02.268641 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Dec 13 01:27:02.277617 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:02.393312 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:27:02.405494 (kubelet)[2893]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:27:02.453220 kubelet[2893]: E1213 01:27:02.453152 2893 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:27:02.458306 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:27:02.458516 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:27:07.771278 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:07.782437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:07.800929 systemd[1]: Reloading requested from client PID 2909 ('systemctl') (unit session-9.scope)... Dec 13 01:27:07.800952 systemd[1]: Reloading... Dec 13 01:27:07.887202 zram_generator::config[2952]: No configuration found. Dec 13 01:27:08.000849 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:27:08.075406 systemd[1]: Reloading finished in 274 ms. Dec 13 01:27:08.112576 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:27:08.112644 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:27:08.113014 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:08.118434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:08.260551 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 01:27:08.264468 (kubelet)[3028]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:27:08.306874 kubelet[3028]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:27:08.306874 kubelet[3028]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:27:08.306874 kubelet[3028]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:27:08.307246 kubelet[3028]: I1213 01:27:08.306920 3028 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:27:08.949592 kubelet[3028]: I1213 01:27:08.949555 3028 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:27:08.949592 kubelet[3028]: I1213 01:27:08.949582 3028 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:27:08.949807 kubelet[3028]: I1213 01:27:08.949786 3028 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:27:08.964364 kubelet[3028]: E1213 01:27:08.964248 3028 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 01:27:08.964364 kubelet[3028]: I1213 01:27:08.964294 3028 
dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:27:08.976150 kubelet[3028]: I1213 01:27:08.975408 3028 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:27:08.976150 kubelet[3028]: I1213 01:27:08.975747 3028 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:27:08.976150 kubelet[3028]: I1213 01:27:08.975907 3028 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:27:08.976150 kubelet[3028]: I1213 
01:27:08.975924 3028 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:27:08.976150 kubelet[3028]: I1213 01:27:08.975933 3028 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:27:08.976150 kubelet[3028]: I1213 01:27:08.976048 3028 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:27:08.978321 kubelet[3028]: I1213 01:27:08.978303 3028 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:27:08.978411 kubelet[3028]: I1213 01:27:08.978402 3028 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:27:08.978477 kubelet[3028]: I1213 01:27:08.978469 3028 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:27:08.978535 kubelet[3028]: I1213 01:27:08.978527 3028 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:27:08.980530 kubelet[3028]: W1213 01:27:08.980452 3028 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-a2790899e3&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 01:27:08.980530 kubelet[3028]: E1213 01:27:08.980532 3028 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-a2790899e3&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 01:27:08.980645 kubelet[3028]: I1213 01:27:08.980624 3028 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:27:08.980904 kubelet[3028]: I1213 01:27:08.980875 3028 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:27:08.980943 kubelet[3028]: W1213 01:27:08.980922 3028 probe.go:268] 
Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:27:08.981415 kubelet[3028]: I1213 01:27:08.981389 3028 server.go:1256] "Started kubelet" Dec 13 01:27:08.983305 kubelet[3028]: I1213 01:27:08.983274 3028 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:27:08.985508 kubelet[3028]: W1213 01:27:08.985475 3028 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 01:27:08.985615 kubelet[3028]: E1213 01:27:08.985604 3028 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 01:27:08.986681 kubelet[3028]: I1213 01:27:08.986665 3028 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:27:08.988764 kubelet[3028]: I1213 01:27:08.987432 3028 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:27:08.988764 kubelet[3028]: I1213 01:27:08.988295 3028 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:27:08.988764 kubelet[3028]: I1213 01:27:08.988376 3028 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:27:08.988764 kubelet[3028]: I1213 01:27:08.988547 3028 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:27:08.990506 kubelet[3028]: I1213 01:27:08.990484 3028 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:27:08.991644 kubelet[3028]: I1213 01:27:08.991616 3028 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:27:08.993129 kubelet[3028]: E1213 01:27:08.993107 
3028 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.34:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.34:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.1-a-a2790899e3.181098396f0db7be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-a-a2790899e3,UID:ci-4081.2.1-a-a2790899e3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-a-a2790899e3,},FirstTimestamp:2024-12-13 01:27:08.981368766 +0000 UTC m=+0.713523890,LastTimestamp:2024-12-13 01:27:08.981368766 +0000 UTC m=+0.713523890,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-a-a2790899e3,}" Dec 13 01:27:08.993374 kubelet[3028]: E1213 01:27:08.993359 3028 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-a2790899e3?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="200ms" Dec 13 01:27:08.993617 kubelet[3028]: I1213 01:27:08.993601 3028 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:27:08.993739 kubelet[3028]: I1213 01:27:08.993724 3028 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:27:08.998629 kubelet[3028]: I1213 01:27:08.998607 3028 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:27:09.004289 kubelet[3028]: I1213 01:27:09.004267 3028 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 13 01:27:09.005151 kubelet[3028]: I1213 01:27:09.005114 3028 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:27:09.005151 kubelet[3028]: I1213 01:27:09.005143 3028 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:27:09.005151 kubelet[3028]: I1213 01:27:09.005175 3028 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:27:09.005305 kubelet[3028]: E1213 01:27:09.005224 3028 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:27:09.009647 kubelet[3028]: W1213 01:27:09.009594 3028 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 01:27:09.009647 kubelet[3028]: E1213 01:27:09.009649 3028 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 01:27:09.009943 kubelet[3028]: W1213 01:27:09.009902 3028 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 01:27:09.009943 kubelet[3028]: E1213 01:27:09.009943 3028 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 01:27:09.017375 
kubelet[3028]: E1213 01:27:09.017353 3028 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:27:09.106060 kubelet[3028]: E1213 01:27:09.106034 3028 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:27:09.107801 kubelet[3028]: I1213 01:27:09.107772 3028 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-a2790899e3" Dec 13 01:27:09.108138 kubelet[3028]: E1213 01:27:09.108110 3028 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4081.2.1-a-a2790899e3" Dec 13 01:27:09.108732 kubelet[3028]: I1213 01:27:09.108710 3028 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:27:09.108732 kubelet[3028]: I1213 01:27:09.108728 3028 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:27:09.108883 kubelet[3028]: I1213 01:27:09.108748 3028 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:27:09.113232 kubelet[3028]: I1213 01:27:09.113211 3028 policy_none.go:49] "None policy: Start" Dec 13 01:27:09.113932 kubelet[3028]: I1213 01:27:09.113910 3028 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:27:09.113990 kubelet[3028]: I1213 01:27:09.113952 3028 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:27:09.122179 kubelet[3028]: I1213 01:27:09.121013 3028 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:27:09.122179 kubelet[3028]: I1213 01:27:09.121263 3028 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:27:09.128346 kubelet[3028]: E1213 01:27:09.128300 3028 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: 
node \"ci-4081.2.1-a-a2790899e3\" not found" Dec 13 01:27:09.194201 kubelet[3028]: E1213 01:27:09.194143 3028 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-a2790899e3?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="400ms" Dec 13 01:27:09.306675 kubelet[3028]: I1213 01:27:09.306578 3028 topology_manager.go:215] "Topology Admit Handler" podUID="4c34ef21ba821365bcd957250b09e94f" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:09.308239 kubelet[3028]: I1213 01:27:09.308091 3028 topology_manager.go:215] "Topology Admit Handler" podUID="f5a8e6da17863be7e7a7377ff48c969c" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:09.310342 kubelet[3028]: I1213 01:27:09.310131 3028 topology_manager.go:215] "Topology Admit Handler" podUID="e436b1a5337e9e8f72f336a6c9a21d76" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:09.310342 kubelet[3028]: I1213 01:27:09.310234 3028 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-a2790899e3" Dec 13 01:27:09.311092 kubelet[3028]: E1213 01:27:09.311062 3028 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4081.2.1-a-a2790899e3" Dec 13 01:27:09.393751 kubelet[3028]: I1213 01:27:09.393717 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e436b1a5337e9e8f72f336a6c9a21d76-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-a-a2790899e3\" (UID: \"e436b1a5337e9e8f72f336a6c9a21d76\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:09.393751 kubelet[3028]: I1213 01:27:09.393760 
3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e436b1a5337e9e8f72f336a6c9a21d76-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-a-a2790899e3\" (UID: \"e436b1a5337e9e8f72f336a6c9a21d76\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:09.393890 kubelet[3028]: I1213 01:27:09.393797 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4c34ef21ba821365bcd957250b09e94f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-a-a2790899e3\" (UID: \"4c34ef21ba821365bcd957250b09e94f\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:09.393890 kubelet[3028]: I1213 01:27:09.393820 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c34ef21ba821365bcd957250b09e94f-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-a2790899e3\" (UID: \"4c34ef21ba821365bcd957250b09e94f\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:09.393890 kubelet[3028]: I1213 01:27:09.393842 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c34ef21ba821365bcd957250b09e94f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-a-a2790899e3\" (UID: \"4c34ef21ba821365bcd957250b09e94f\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:09.393890 kubelet[3028]: I1213 01:27:09.393873 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e436b1a5337e9e8f72f336a6c9a21d76-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4081.2.1-a-a2790899e3\" (UID: \"e436b1a5337e9e8f72f336a6c9a21d76\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:09.393980 kubelet[3028]: I1213 01:27:09.393893 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c34ef21ba821365bcd957250b09e94f-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-a2790899e3\" (UID: \"4c34ef21ba821365bcd957250b09e94f\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:09.393980 kubelet[3028]: I1213 01:27:09.393916 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c34ef21ba821365bcd957250b09e94f-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-a-a2790899e3\" (UID: \"4c34ef21ba821365bcd957250b09e94f\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:09.393980 kubelet[3028]: I1213 01:27:09.393948 3028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5a8e6da17863be7e7a7377ff48c969c-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-a-a2790899e3\" (UID: \"f5a8e6da17863be7e7a7377ff48c969c\") " pod="kube-system/kube-scheduler-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:09.595809 kubelet[3028]: E1213 01:27:09.595715 3028 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-a2790899e3?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="800ms" Dec 13 01:27:09.615555 containerd[1811]: time="2024-12-13T01:27:09.615502178Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-a-a2790899e3,Uid:f5a8e6da17863be7e7a7377ff48c969c,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:09.616052 containerd[1811]: time="2024-12-13T01:27:09.615909339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-a-a2790899e3,Uid:4c34ef21ba821365bcd957250b09e94f,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:09.620395 containerd[1811]: time="2024-12-13T01:27:09.620259065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-a-a2790899e3,Uid:e436b1a5337e9e8f72f336a6c9a21d76,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:09.713617 kubelet[3028]: I1213 01:27:09.713564 3028 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-a2790899e3" Dec 13 01:27:09.713994 kubelet[3028]: E1213 01:27:09.713976 3028 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4081.2.1-a-a2790899e3" Dec 13 01:27:10.266934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1929178855.mount: Deactivated successfully. 
Dec 13 01:27:10.297926 kubelet[3028]: W1213 01:27:10.297871 3028 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 01:27:10.297926 kubelet[3028]: E1213 01:27:10.297930 3028 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 01:27:10.300209 kubelet[3028]: W1213 01:27:10.300155 3028 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-a2790899e3&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 01:27:10.300252 kubelet[3028]: E1213 01:27:10.300216 3028 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-a2790899e3&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 01:27:10.302294 containerd[1811]: time="2024-12-13T01:27:10.302252188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:10.305470 containerd[1811]: time="2024-12-13T01:27:10.305433192Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Dec 13 01:27:10.307477 containerd[1811]: time="2024-12-13T01:27:10.307438275Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:10.311955 containerd[1811]: time="2024-12-13T01:27:10.311247241Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:10.313194 containerd[1811]: time="2024-12-13T01:27:10.313123244Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:27:10.317621 containerd[1811]: time="2024-12-13T01:27:10.317581730Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:10.320022 containerd[1811]: time="2024-12-13T01:27:10.319968454Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:27:10.324266 containerd[1811]: time="2024-12-13T01:27:10.324233900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:27:10.325263 containerd[1811]: time="2024-12-13T01:27:10.325018021Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 709.047322ms" Dec 13 01:27:10.326712 containerd[1811]: time="2024-12-13T01:27:10.326679824Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 711.099926ms" Dec 13 01:27:10.330008 containerd[1811]: time="2024-12-13T01:27:10.329964268Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 709.647443ms" Dec 13 01:27:10.362736 kubelet[3028]: W1213 01:27:10.362674 3028 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 01:27:10.362736 kubelet[3028]: E1213 01:27:10.362738 3028 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 01:27:10.396634 kubelet[3028]: E1213 01:27:10.396603 3028 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-a2790899e3?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="1.6s" Dec 13 01:27:10.516003 kubelet[3028]: I1213 01:27:10.515978 3028 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-a2790899e3" Dec 13 01:27:10.516575 kubelet[3028]: E1213 01:27:10.516556 3028 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 
10.200.20.34:6443: connect: connection refused" node="ci-4081.2.1-a-a2790899e3" Dec 13 01:27:10.527974 kubelet[3028]: W1213 01:27:10.527939 3028 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 01:27:10.527974 kubelet[3028]: E1213 01:27:10.527980 3028 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 01:27:10.888155 containerd[1811]: time="2024-12-13T01:27:10.887929499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:10.888477 containerd[1811]: time="2024-12-13T01:27:10.888148539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:10.888477 containerd[1811]: time="2024-12-13T01:27:10.888231140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:10.888897 containerd[1811]: time="2024-12-13T01:27:10.888372500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:10.890200 containerd[1811]: time="2024-12-13T01:27:10.889882382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:10.890200 containerd[1811]: time="2024-12-13T01:27:10.889934982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:10.890200 containerd[1811]: time="2024-12-13T01:27:10.889950142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:10.890200 containerd[1811]: time="2024-12-13T01:27:10.890031702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:10.890941 containerd[1811]: time="2024-12-13T01:27:10.890816023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:10.891379 containerd[1811]: time="2024-12-13T01:27:10.891325704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:10.891585 containerd[1811]: time="2024-12-13T01:27:10.891545705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:10.892105 containerd[1811]: time="2024-12-13T01:27:10.892050025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:10.955345 containerd[1811]: time="2024-12-13T01:27:10.955309162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-a-a2790899e3,Uid:e436b1a5337e9e8f72f336a6c9a21d76,Namespace:kube-system,Attempt:0,} returns sandbox id \"9eff770f597c6a178ae7422aee9b2ad4e52619f389aed39c14061792cfefec5e\"" Dec 13 01:27:10.960886 containerd[1811]: time="2024-12-13T01:27:10.960837050Z" level=info msg="CreateContainer within sandbox \"9eff770f597c6a178ae7422aee9b2ad4e52619f389aed39c14061792cfefec5e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:27:10.964465 containerd[1811]: time="2024-12-13T01:27:10.964354536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-a-a2790899e3,Uid:f5a8e6da17863be7e7a7377ff48c969c,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc2caa603c69f68d8ba3918887608c55c480514733c22c2e51d654a284f2bb1a\"" Dec 13 01:27:10.966722 containerd[1811]: time="2024-12-13T01:27:10.966683619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-a-a2790899e3,Uid:4c34ef21ba821365bcd957250b09e94f,Namespace:kube-system,Attempt:0,} returns sandbox id \"8338d8d09d281003e1165432b9bbbcaa1dfa59173929093495d116b779d3ad8f\"" Dec 13 01:27:10.968834 containerd[1811]: time="2024-12-13T01:27:10.968445022Z" level=info msg="CreateContainer within sandbox \"cc2caa603c69f68d8ba3918887608c55c480514733c22c2e51d654a284f2bb1a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:27:10.971530 containerd[1811]: time="2024-12-13T01:27:10.971403506Z" level=info msg="CreateContainer within sandbox \"8338d8d09d281003e1165432b9bbbcaa1dfa59173929093495d116b779d3ad8f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:27:11.038581 containerd[1811]: time="2024-12-13T01:27:11.038534209Z" level=info msg="CreateContainer within sandbox 
\"9eff770f597c6a178ae7422aee9b2ad4e52619f389aed39c14061792cfefec5e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9fd4b61880df7eb9a2a0d26a23d4db851443e9b41a4225beb3740b46e9cf5260\"" Dec 13 01:27:11.039210 containerd[1811]: time="2024-12-13T01:27:11.039181490Z" level=info msg="StartContainer for \"9fd4b61880df7eb9a2a0d26a23d4db851443e9b41a4225beb3740b46e9cf5260\"" Dec 13 01:27:11.041016 containerd[1811]: time="2024-12-13T01:27:11.040933892Z" level=info msg="CreateContainer within sandbox \"8338d8d09d281003e1165432b9bbbcaa1dfa59173929093495d116b779d3ad8f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5a4282bbda48522c247aeb1f23778cf6e2faeef1d8eceefc22f1ce774af3195e\"" Dec 13 01:27:11.041418 containerd[1811]: time="2024-12-13T01:27:11.041341173Z" level=info msg="StartContainer for \"5a4282bbda48522c247aeb1f23778cf6e2faeef1d8eceefc22f1ce774af3195e\"" Dec 13 01:27:11.044974 containerd[1811]: time="2024-12-13T01:27:11.044886538Z" level=info msg="CreateContainer within sandbox \"cc2caa603c69f68d8ba3918887608c55c480514733c22c2e51d654a284f2bb1a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2ac04ade089fe91574003e20a6d2f20bd02b9650013f33343269ee9e60bebf5b\"" Dec 13 01:27:11.045884 containerd[1811]: time="2024-12-13T01:27:11.045358459Z" level=info msg="StartContainer for \"2ac04ade089fe91574003e20a6d2f20bd02b9650013f33343269ee9e60bebf5b\"" Dec 13 01:27:11.050887 kubelet[3028]: E1213 01:27:11.050845 3028 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.34:6443: connect: connection refused Dec 13 01:27:11.120615 containerd[1811]: time="2024-12-13T01:27:11.120572134Z" level=info msg="StartContainer for 
\"9fd4b61880df7eb9a2a0d26a23d4db851443e9b41a4225beb3740b46e9cf5260\" returns successfully" Dec 13 01:27:11.144923 containerd[1811]: time="2024-12-13T01:27:11.144798411Z" level=info msg="StartContainer for \"5a4282bbda48522c247aeb1f23778cf6e2faeef1d8eceefc22f1ce774af3195e\" returns successfully" Dec 13 01:27:11.154756 containerd[1811]: time="2024-12-13T01:27:11.154708186Z" level=info msg="StartContainer for \"2ac04ade089fe91574003e20a6d2f20bd02b9650013f33343269ee9e60bebf5b\" returns successfully" Dec 13 01:27:12.120278 kubelet[3028]: I1213 01:27:12.120231 3028 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-a2790899e3" Dec 13 01:27:12.933890 kubelet[3028]: E1213 01:27:12.933849 3028 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.2.1-a-a2790899e3\" not found" node="ci-4081.2.1-a-a2790899e3" Dec 13 01:27:12.948299 kubelet[3028]: I1213 01:27:12.948268 3028 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.1-a-a2790899e3" Dec 13 01:27:12.987412 kubelet[3028]: I1213 01:27:12.987380 3028 apiserver.go:52] "Watching apiserver" Dec 13 01:27:12.998490 kubelet[3028]: E1213 01:27:12.998454 3028 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.2.1-a-a2790899e3.181098396f0db7be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-a-a2790899e3,UID:ci-4081.2.1-a-a2790899e3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-a-a2790899e3,},FirstTimestamp:2024-12-13 01:27:08.981368766 +0000 UTC m=+0.713523890,LastTimestamp:2024-12-13 01:27:08.981368766 +0000 UTC m=+0.713523890,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-a-a2790899e3,}" Dec 13 01:27:13.055177 kubelet[3028]: E1213 01:27:13.054033 3028 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.2.1-a-a2790899e3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:13.055452 kubelet[3028]: E1213 01:27:13.055424 3028 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.2.1-a-a2790899e3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:13.055956 kubelet[3028]: E1213 01:27:13.055934 3028 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.2.1-a-a2790899e3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:13.093344 kubelet[3028]: I1213 01:27:13.093311 3028 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:27:14.056315 kubelet[3028]: W1213 01:27:14.056279 3028 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:27:14.060612 kubelet[3028]: W1213 01:27:14.060588 3028 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:27:15.882825 systemd[1]: Reloading requested from client PID 3298 ('systemctl') (unit session-9.scope)... Dec 13 01:27:15.882838 systemd[1]: Reloading... Dec 13 01:27:15.955277 zram_generator::config[3338]: No configuration found. 
Dec 13 01:27:16.074472 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:27:16.153230 systemd[1]: Reloading finished in 270 ms. Dec 13 01:27:16.178829 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:16.198466 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:27:16.198797 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:16.207665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:16.420248 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:16.426319 (kubelet)[3412]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:27:16.497591 kubelet[3412]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:27:16.497591 kubelet[3412]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:27:16.497591 kubelet[3412]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 01:27:16.498790 kubelet[3412]: I1213 01:27:16.497717 3412 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:27:16.505053 kubelet[3412]: I1213 01:27:16.504069 3412 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:27:16.505053 kubelet[3412]: I1213 01:27:16.504096 3412 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:27:16.505285 kubelet[3412]: I1213 01:27:16.505115 3412 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:27:16.506786 kubelet[3412]: I1213 01:27:16.506747 3412 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:27:16.508881 kubelet[3412]: I1213 01:27:16.508797 3412 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:27:16.520918 kubelet[3412]: I1213 01:27:16.519959 3412 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:27:16.520918 kubelet[3412]: I1213 01:27:16.520606 3412 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:27:16.520918 kubelet[3412]: I1213 01:27:16.521086 3412 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:27:16.520918 kubelet[3412]: I1213 01:27:16.521132 3412 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:27:16.520918 kubelet[3412]: I1213 01:27:16.521143 3412 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:27:16.520918 kubelet[3412]: 
I1213 01:27:16.521204 3412 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:27:16.522079 kubelet[3412]: I1213 01:27:16.521309 3412 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:27:16.522079 kubelet[3412]: I1213 01:27:16.521322 3412 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:27:16.522079 kubelet[3412]: I1213 01:27:16.521339 3412 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:27:16.522079 kubelet[3412]: I1213 01:27:16.521349 3412 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:27:16.535184 kubelet[3412]: I1213 01:27:16.534267 3412 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:27:16.535184 kubelet[3412]: I1213 01:27:16.534506 3412 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:27:16.535184 kubelet[3412]: I1213 01:27:16.534964 3412 server.go:1256] "Started kubelet" Dec 13 01:27:16.562080 kubelet[3412]: I1213 01:27:16.560371 3412 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:27:16.566111 kubelet[3412]: I1213 01:27:16.566084 3412 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:27:16.567190 kubelet[3412]: I1213 01:27:16.566957 3412 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:27:16.567259 kubelet[3412]: I1213 01:27:16.567195 3412 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:27:16.576804 kubelet[3412]: I1213 01:27:16.574637 3412 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:27:16.576804 kubelet[3412]: I1213 01:27:16.576467 3412 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:27:16.577877 kubelet[3412]: I1213 01:27:16.577515 3412 desired_state_of_world_populator.go:151] 
"Desired state populator starts to run" Dec 13 01:27:16.577877 kubelet[3412]: I1213 01:27:16.577655 3412 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:27:16.579803 kubelet[3412]: I1213 01:27:16.579191 3412 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:27:16.580055 kubelet[3412]: I1213 01:27:16.580027 3412 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:27:16.580084 kubelet[3412]: I1213 01:27:16.580059 3412 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:27:16.580084 kubelet[3412]: I1213 01:27:16.580075 3412 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:27:16.580128 kubelet[3412]: E1213 01:27:16.580119 3412 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:27:16.595527 kubelet[3412]: I1213 01:27:16.595500 3412 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:27:16.595527 kubelet[3412]: I1213 01:27:16.595521 3412 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:27:16.595671 kubelet[3412]: I1213 01:27:16.595584 3412 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:27:16.596368 kubelet[3412]: E1213 01:27:16.596345 3412 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:27:16.651432 kubelet[3412]: I1213 01:27:16.651411 3412 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:27:16.651865 kubelet[3412]: I1213 01:27:16.651473 3412 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:27:16.651865 kubelet[3412]: I1213 01:27:16.651492 3412 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:27:16.651865 kubelet[3412]: I1213 01:27:16.651633 3412 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:27:16.651865 kubelet[3412]: I1213 01:27:16.651651 3412 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:27:16.651865 kubelet[3412]: I1213 01:27:16.651657 3412 policy_none.go:49] "None policy: Start" Dec 13 01:27:16.652691 kubelet[3412]: I1213 01:27:16.652416 3412 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:27:16.652691 kubelet[3412]: I1213 01:27:16.652439 3412 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:27:16.652691 kubelet[3412]: I1213 01:27:16.652611 3412 state_mem.go:75] "Updated machine memory state" Dec 13 01:27:16.653775 kubelet[3412]: I1213 01:27:16.653756 3412 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:27:16.653996 kubelet[3412]: I1213 01:27:16.653980 3412 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:27:16.677234 kubelet[3412]: I1213 01:27:16.677135 3412 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-a2790899e3" Dec 13 01:27:16.680398 kubelet[3412]: I1213 01:27:16.680367 3412 topology_manager.go:215] "Topology Admit Handler" podUID="e436b1a5337e9e8f72f336a6c9a21d76" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:16.680661 kubelet[3412]: I1213 01:27:16.680638 3412 topology_manager.go:215] "Topology Admit Handler" 
podUID="4c34ef21ba821365bcd957250b09e94f" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:16.682192 kubelet[3412]: I1213 01:27:16.681176 3412 topology_manager.go:215] "Topology Admit Handler" podUID="f5a8e6da17863be7e7a7377ff48c969c" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:16.693049 kubelet[3412]: W1213 01:27:16.692732 3412 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:27:16.694318 kubelet[3412]: W1213 01:27:16.692813 3412 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:27:16.694318 kubelet[3412]: E1213 01:27:16.693111 3412 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.2.1-a-a2790899e3\" already exists" pod="kube-system/kube-apiserver-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:16.694318 kubelet[3412]: I1213 01:27:16.693335 3412 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.2.1-a-a2790899e3" Dec 13 01:27:16.694452 kubelet[3412]: I1213 01:27:16.694404 3412 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.1-a-a2790899e3" Dec 13 01:27:16.695371 kubelet[3412]: W1213 01:27:16.693365 3412 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:27:16.695371 kubelet[3412]: E1213 01:27:16.694751 3412 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.2.1-a-a2790899e3\" already exists" pod="kube-system/kube-controller-manager-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:16.878845 kubelet[3412]: I1213 01:27:16.878809 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c34ef21ba821365bcd957250b09e94f-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-a2790899e3\" (UID: \"4c34ef21ba821365bcd957250b09e94f\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:16.878845 kubelet[3412]: I1213 01:27:16.878857 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5a8e6da17863be7e7a7377ff48c969c-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-a-a2790899e3\" (UID: \"f5a8e6da17863be7e7a7377ff48c969c\") " pod="kube-system/kube-scheduler-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:16.879007 kubelet[3412]: I1213 01:27:16.878880 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c34ef21ba821365bcd957250b09e94f-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-a-a2790899e3\" (UID: \"4c34ef21ba821365bcd957250b09e94f\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:16.879007 kubelet[3412]: I1213 01:27:16.878905 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c34ef21ba821365bcd957250b09e94f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-a-a2790899e3\" (UID: \"4c34ef21ba821365bcd957250b09e94f\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:16.879007 kubelet[3412]: I1213 01:27:16.878926 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e436b1a5337e9e8f72f336a6c9a21d76-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-a-a2790899e3\" (UID: \"e436b1a5337e9e8f72f336a6c9a21d76\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-a2790899e3" 
Dec 13 01:27:16.879007 kubelet[3412]: I1213 01:27:16.878945 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e436b1a5337e9e8f72f336a6c9a21d76-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-a-a2790899e3\" (UID: \"e436b1a5337e9e8f72f336a6c9a21d76\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:16.879007 kubelet[3412]: I1213 01:27:16.878971 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e436b1a5337e9e8f72f336a6c9a21d76-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-a-a2790899e3\" (UID: \"e436b1a5337e9e8f72f336a6c9a21d76\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:16.879115 kubelet[3412]: I1213 01:27:16.878992 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4c34ef21ba821365bcd957250b09e94f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-a-a2790899e3\" (UID: \"4c34ef21ba821365bcd957250b09e94f\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:16.879115 kubelet[3412]: I1213 01:27:16.879014 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c34ef21ba821365bcd957250b09e94f-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-a2790899e3\" (UID: \"4c34ef21ba821365bcd957250b09e94f\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:17.525202 kubelet[3412]: I1213 01:27:17.522560 3412 apiserver.go:52] "Watching apiserver" Dec 13 01:27:17.577950 kubelet[3412]: I1213 01:27:17.577908 3412 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 
01:27:17.654192 kubelet[3412]: W1213 01:27:17.652734 3412 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:27:17.654192 kubelet[3412]: E1213 01:27:17.652800 3412 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.2.1-a-a2790899e3\" already exists" pod="kube-system/kube-apiserver-ci-4081.2.1-a-a2790899e3" Dec 13 01:27:17.748187 kubelet[3412]: I1213 01:27:17.744831 3412 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.2.1-a-a2790899e3" podStartSLOduration=3.744789598 podStartE2EDuration="3.744789598s" podCreationTimestamp="2024-12-13 01:27:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:17.712255508 +0000 UTC m=+1.281792716" watchObservedRunningTime="2024-12-13 01:27:17.744789598 +0000 UTC m=+1.314326806" Dec 13 01:27:17.759799 kubelet[3412]: I1213 01:27:17.759768 3412 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.2.1-a-a2790899e3" podStartSLOduration=1.75971978 podStartE2EDuration="1.75971978s" podCreationTimestamp="2024-12-13 01:27:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:17.744977998 +0000 UTC m=+1.314515206" watchObservedRunningTime="2024-12-13 01:27:17.75971978 +0000 UTC m=+1.329256988" Dec 13 01:27:17.779616 kubelet[3412]: I1213 01:27:17.779502 3412 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.2.1-a-a2790899e3" podStartSLOduration=3.779460531 podStartE2EDuration="3.779460531s" podCreationTimestamp="2024-12-13 01:27:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:17.760615222 +0000 UTC m=+1.330152430" watchObservedRunningTime="2024-12-13 01:27:17.779460531 +0000 UTC m=+1.348997779" Dec 13 01:27:21.459192 sudo[2393]: pam_unix(sudo:session): session closed for user root Dec 13 01:27:21.527224 sshd[2389]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:21.531601 systemd-logind[1787]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:27:21.531985 systemd[1]: sshd@6-10.200.20.34:22-10.200.16.10:52620.service: Deactivated successfully. Dec 13 01:27:21.534712 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:27:21.536147 systemd-logind[1787]: Removed session 9. Dec 13 01:27:29.792962 kubelet[3412]: I1213 01:27:29.790812 3412 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:27:29.795379 kubelet[3412]: I1213 01:27:29.793682 3412 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:27:29.795410 containerd[1811]: time="2024-12-13T01:27:29.793332409Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 01:27:30.564595 kubelet[3412]: I1213 01:27:30.564418 3412 topology_manager.go:215] "Topology Admit Handler" podUID="6f17967d-140a-484d-b79e-421efb1b4a1a" podNamespace="kube-system" podName="kube-proxy-zqf5k" Dec 13 01:27:30.692138 kubelet[3412]: I1213 01:27:30.692093 3412 topology_manager.go:215] "Topology Admit Handler" podUID="2bd2ac2d-fa79-44fb-9dd6-55786a22fc94" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-f6mgv" Dec 13 01:27:30.757857 kubelet[3412]: I1213 01:27:30.757792 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f17967d-140a-484d-b79e-421efb1b4a1a-xtables-lock\") pod \"kube-proxy-zqf5k\" (UID: \"6f17967d-140a-484d-b79e-421efb1b4a1a\") " pod="kube-system/kube-proxy-zqf5k" Dec 13 01:27:30.757857 kubelet[3412]: I1213 01:27:30.757836 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6f17967d-140a-484d-b79e-421efb1b4a1a-kube-proxy\") pod \"kube-proxy-zqf5k\" (UID: \"6f17967d-140a-484d-b79e-421efb1b4a1a\") " pod="kube-system/kube-proxy-zqf5k" Dec 13 01:27:30.758120 kubelet[3412]: I1213 01:27:30.757973 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkbtk\" (UniqueName: \"kubernetes.io/projected/6f17967d-140a-484d-b79e-421efb1b4a1a-kube-api-access-jkbtk\") pod \"kube-proxy-zqf5k\" (UID: \"6f17967d-140a-484d-b79e-421efb1b4a1a\") " pod="kube-system/kube-proxy-zqf5k" Dec 13 01:27:30.758287 kubelet[3412]: I1213 01:27:30.758181 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2bd2ac2d-fa79-44fb-9dd6-55786a22fc94-var-lib-calico\") pod \"tigera-operator-c7ccbd65-f6mgv\" (UID: \"2bd2ac2d-fa79-44fb-9dd6-55786a22fc94\") " 
pod="tigera-operator/tigera-operator-c7ccbd65-f6mgv" Dec 13 01:27:30.758287 kubelet[3412]: I1213 01:27:30.758220 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv8bk\" (UniqueName: \"kubernetes.io/projected/2bd2ac2d-fa79-44fb-9dd6-55786a22fc94-kube-api-access-bv8bk\") pod \"tigera-operator-c7ccbd65-f6mgv\" (UID: \"2bd2ac2d-fa79-44fb-9dd6-55786a22fc94\") " pod="tigera-operator/tigera-operator-c7ccbd65-f6mgv" Dec 13 01:27:30.758287 kubelet[3412]: I1213 01:27:30.758256 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f17967d-140a-484d-b79e-421efb1b4a1a-lib-modules\") pod \"kube-proxy-zqf5k\" (UID: \"6f17967d-140a-484d-b79e-421efb1b4a1a\") " pod="kube-system/kube-proxy-zqf5k" Dec 13 01:27:30.996562 containerd[1811]: time="2024-12-13T01:27:30.996438755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-f6mgv,Uid:2bd2ac2d-fa79-44fb-9dd6-55786a22fc94,Namespace:tigera-operator,Attempt:0,}" Dec 13 01:27:31.036821 containerd[1811]: time="2024-12-13T01:27:31.036456573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:31.036821 containerd[1811]: time="2024-12-13T01:27:31.036504173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:31.036821 containerd[1811]: time="2024-12-13T01:27:31.036514813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:31.036821 containerd[1811]: time="2024-12-13T01:27:31.036586493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:31.075045 containerd[1811]: time="2024-12-13T01:27:31.075003549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-f6mgv,Uid:2bd2ac2d-fa79-44fb-9dd6-55786a22fc94,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b8e1d0493b9290e096eab000e18fe92752a16ab5272c37b207c5e482aa6aded0\"" Dec 13 01:27:31.077439 containerd[1811]: time="2024-12-13T01:27:31.077360993Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 01:27:31.167772 containerd[1811]: time="2024-12-13T01:27:31.167728284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zqf5k,Uid:6f17967d-140a-484d-b79e-421efb1b4a1a,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:31.958882 containerd[1811]: time="2024-12-13T01:27:31.958794192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:31.959294 containerd[1811]: time="2024-12-13T01:27:31.958955912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:31.959294 containerd[1811]: time="2024-12-13T01:27:31.958996592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:31.959382 containerd[1811]: time="2024-12-13T01:27:31.959260552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:31.992096 containerd[1811]: time="2024-12-13T01:27:31.992060520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zqf5k,Uid:6f17967d-140a-484d-b79e-421efb1b4a1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"14f081258ed66a75ed0aeeca558d5a9f3085a6d0fff7142b35b72530b3a9eed3\"" Dec 13 01:27:31.996756 containerd[1811]: time="2024-12-13T01:27:31.996716287Z" level=info msg="CreateContainer within sandbox \"14f081258ed66a75ed0aeeca558d5a9f3085a6d0fff7142b35b72530b3a9eed3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:27:32.019715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1237575600.mount: Deactivated successfully. Dec 13 01:27:32.027887 containerd[1811]: time="2024-12-13T01:27:32.027834132Z" level=info msg="CreateContainer within sandbox \"14f081258ed66a75ed0aeeca558d5a9f3085a6d0fff7142b35b72530b3a9eed3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7e1e39f0dd8a17a062c9b2230201942ae001b3b46294bfcc95b1345880a6a456\"" Dec 13 01:27:32.029071 containerd[1811]: time="2024-12-13T01:27:32.028944974Z" level=info msg="StartContainer for \"7e1e39f0dd8a17a062c9b2230201942ae001b3b46294bfcc95b1345880a6a456\"" Dec 13 01:27:32.077189 containerd[1811]: time="2024-12-13T01:27:32.076491203Z" level=info msg="StartContainer for \"7e1e39f0dd8a17a062c9b2230201942ae001b3b46294bfcc95b1345880a6a456\" returns successfully" Dec 13 01:27:32.663023 kubelet[3412]: I1213 01:27:32.662410 3412 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zqf5k" podStartSLOduration=2.662371413 podStartE2EDuration="2.662371413s" podCreationTimestamp="2024-12-13 01:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:27:32.661694612 +0000 UTC m=+16.231231820" watchObservedRunningTime="2024-12-13 
01:27:32.662371413 +0000 UTC m=+16.231908621" Dec 13 01:27:33.495989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3192436645.mount: Deactivated successfully. Dec 13 01:27:33.818334 containerd[1811]: time="2024-12-13T01:27:33.818212010Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:33.820020 containerd[1811]: time="2024-12-13T01:27:33.819964053Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125988" Dec 13 01:27:33.822220 containerd[1811]: time="2024-12-13T01:27:33.822157736Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:33.826611 containerd[1811]: time="2024-12-13T01:27:33.826563502Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:33.827301 containerd[1811]: time="2024-12-13T01:27:33.827177663Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.74922107s" Dec 13 01:27:33.827301 containerd[1811]: time="2024-12-13T01:27:33.827210023Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Dec 13 01:27:33.831796 containerd[1811]: time="2024-12-13T01:27:33.831762110Z" level=info msg="CreateContainer within sandbox \"b8e1d0493b9290e096eab000e18fe92752a16ab5272c37b207c5e482aa6aded0\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 01:27:33.865814 containerd[1811]: time="2024-12-13T01:27:33.865761079Z" level=info msg="CreateContainer within sandbox \"b8e1d0493b9290e096eab000e18fe92752a16ab5272c37b207c5e482aa6aded0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"eed8de0f8fa0a3b3c2e50f4ab83f6492093c583eafd72cec26fb30147a45490c\"" Dec 13 01:27:33.866777 containerd[1811]: time="2024-12-13T01:27:33.866467080Z" level=info msg="StartContainer for \"eed8de0f8fa0a3b3c2e50f4ab83f6492093c583eafd72cec26fb30147a45490c\"" Dec 13 01:27:33.913181 containerd[1811]: time="2024-12-13T01:27:33.913123148Z" level=info msg="StartContainer for \"eed8de0f8fa0a3b3c2e50f4ab83f6492093c583eafd72cec26fb30147a45490c\" returns successfully" Dec 13 01:27:34.667927 kubelet[3412]: I1213 01:27:34.667017 3412 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-f6mgv" podStartSLOduration=1.91575993 podStartE2EDuration="4.666975562s" podCreationTimestamp="2024-12-13 01:27:30 +0000 UTC" firstStartedPulling="2024-12-13 01:27:31.076226071 +0000 UTC m=+14.645763279" lastFinishedPulling="2024-12-13 01:27:33.827441703 +0000 UTC m=+17.396978911" observedRunningTime="2024-12-13 01:27:34.666934562 +0000 UTC m=+18.236471770" watchObservedRunningTime="2024-12-13 01:27:34.666975562 +0000 UTC m=+18.236512770" Dec 13 01:27:37.346423 kubelet[3412]: I1213 01:27:37.346348 3412 topology_manager.go:215] "Topology Admit Handler" podUID="dee842a0-d6db-4dfd-acd1-87c9a0350ab0" podNamespace="calico-system" podName="calico-typha-65ff5b8985-94n8f" Dec 13 01:27:37.426604 kubelet[3412]: I1213 01:27:37.426570 3412 topology_manager.go:215] "Topology Admit Handler" podUID="efb7d14c-0aa8-4a72-a426-be1bddd0d465" podNamespace="calico-system" podName="calico-node-h84g5" Dec 13 01:27:37.495628 kubelet[3412]: I1213 01:27:37.495435 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-mdsh2\" (UniqueName: \"kubernetes.io/projected/dee842a0-d6db-4dfd-acd1-87c9a0350ab0-kube-api-access-mdsh2\") pod \"calico-typha-65ff5b8985-94n8f\" (UID: \"dee842a0-d6db-4dfd-acd1-87c9a0350ab0\") " pod="calico-system/calico-typha-65ff5b8985-94n8f" Dec 13 01:27:37.495628 kubelet[3412]: I1213 01:27:37.495496 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dee842a0-d6db-4dfd-acd1-87c9a0350ab0-tigera-ca-bundle\") pod \"calico-typha-65ff5b8985-94n8f\" (UID: \"dee842a0-d6db-4dfd-acd1-87c9a0350ab0\") " pod="calico-system/calico-typha-65ff5b8985-94n8f" Dec 13 01:27:37.495628 kubelet[3412]: I1213 01:27:37.495521 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dee842a0-d6db-4dfd-acd1-87c9a0350ab0-typha-certs\") pod \"calico-typha-65ff5b8985-94n8f\" (UID: \"dee842a0-d6db-4dfd-acd1-87c9a0350ab0\") " pod="calico-system/calico-typha-65ff5b8985-94n8f" Dec 13 01:27:37.553316 kubelet[3412]: I1213 01:27:37.553109 3412 topology_manager.go:215] "Topology Admit Handler" podUID="31a49e50-08a6-4d91-bf9f-b4d8e0e1e065" podNamespace="calico-system" podName="csi-node-driver-8xjpw" Dec 13 01:27:37.553796 kubelet[3412]: E1213 01:27:37.553772 3412 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8xjpw" podUID="31a49e50-08a6-4d91-bf9f-b4d8e0e1e065" Dec 13 01:27:37.597589 kubelet[3412]: I1213 01:27:37.596942 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/efb7d14c-0aa8-4a72-a426-be1bddd0d465-cni-log-dir\") pod \"calico-node-h84g5\" 
(UID: \"efb7d14c-0aa8-4a72-a426-be1bddd0d465\") " pod="calico-system/calico-node-h84g5" Dec 13 01:27:37.597589 kubelet[3412]: I1213 01:27:37.596979 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/31a49e50-08a6-4d91-bf9f-b4d8e0e1e065-varrun\") pod \"csi-node-driver-8xjpw\" (UID: \"31a49e50-08a6-4d91-bf9f-b4d8e0e1e065\") " pod="calico-system/csi-node-driver-8xjpw" Dec 13 01:27:37.597589 kubelet[3412]: I1213 01:27:37.597004 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/31a49e50-08a6-4d91-bf9f-b4d8e0e1e065-socket-dir\") pod \"csi-node-driver-8xjpw\" (UID: \"31a49e50-08a6-4d91-bf9f-b4d8e0e1e065\") " pod="calico-system/csi-node-driver-8xjpw" Dec 13 01:27:37.597589 kubelet[3412]: I1213 01:27:37.597026 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efb7d14c-0aa8-4a72-a426-be1bddd0d465-tigera-ca-bundle\") pod \"calico-node-h84g5\" (UID: \"efb7d14c-0aa8-4a72-a426-be1bddd0d465\") " pod="calico-system/calico-node-h84g5" Dec 13 01:27:37.597589 kubelet[3412]: I1213 01:27:37.597050 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/efb7d14c-0aa8-4a72-a426-be1bddd0d465-policysync\") pod \"calico-node-h84g5\" (UID: \"efb7d14c-0aa8-4a72-a426-be1bddd0d465\") " pod="calico-system/calico-node-h84g5" Dec 13 01:27:37.597799 kubelet[3412]: I1213 01:27:37.597070 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/efb7d14c-0aa8-4a72-a426-be1bddd0d465-var-lib-calico\") pod \"calico-node-h84g5\" (UID: \"efb7d14c-0aa8-4a72-a426-be1bddd0d465\") " 
pod="calico-system/calico-node-h84g5" Dec 13 01:27:37.597799 kubelet[3412]: I1213 01:27:37.597090 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rfnj\" (UniqueName: \"kubernetes.io/projected/efb7d14c-0aa8-4a72-a426-be1bddd0d465-kube-api-access-4rfnj\") pod \"calico-node-h84g5\" (UID: \"efb7d14c-0aa8-4a72-a426-be1bddd0d465\") " pod="calico-system/calico-node-h84g5" Dec 13 01:27:37.599473 kubelet[3412]: I1213 01:27:37.599283 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/efb7d14c-0aa8-4a72-a426-be1bddd0d465-var-run-calico\") pod \"calico-node-h84g5\" (UID: \"efb7d14c-0aa8-4a72-a426-be1bddd0d465\") " pod="calico-system/calico-node-h84g5" Dec 13 01:27:37.599473 kubelet[3412]: I1213 01:27:37.599343 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/efb7d14c-0aa8-4a72-a426-be1bddd0d465-cni-bin-dir\") pod \"calico-node-h84g5\" (UID: \"efb7d14c-0aa8-4a72-a426-be1bddd0d465\") " pod="calico-system/calico-node-h84g5" Dec 13 01:27:37.599473 kubelet[3412]: I1213 01:27:37.599385 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/efb7d14c-0aa8-4a72-a426-be1bddd0d465-cni-net-dir\") pod \"calico-node-h84g5\" (UID: \"efb7d14c-0aa8-4a72-a426-be1bddd0d465\") " pod="calico-system/calico-node-h84g5" Dec 13 01:27:37.601790 kubelet[3412]: I1213 01:27:37.599588 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/efb7d14c-0aa8-4a72-a426-be1bddd0d465-flexvol-driver-host\") pod \"calico-node-h84g5\" (UID: \"efb7d14c-0aa8-4a72-a426-be1bddd0d465\") " pod="calico-system/calico-node-h84g5" Dec 13 
01:27:37.601790 kubelet[3412]: I1213 01:27:37.599716 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v7j2\" (UniqueName: \"kubernetes.io/projected/31a49e50-08a6-4d91-bf9f-b4d8e0e1e065-kube-api-access-4v7j2\") pod \"csi-node-driver-8xjpw\" (UID: \"31a49e50-08a6-4d91-bf9f-b4d8e0e1e065\") " pod="calico-system/csi-node-driver-8xjpw" Dec 13 01:27:37.601790 kubelet[3412]: I1213 01:27:37.599744 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efb7d14c-0aa8-4a72-a426-be1bddd0d465-xtables-lock\") pod \"calico-node-h84g5\" (UID: \"efb7d14c-0aa8-4a72-a426-be1bddd0d465\") " pod="calico-system/calico-node-h84g5" Dec 13 01:27:37.601790 kubelet[3412]: I1213 01:27:37.599766 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/31a49e50-08a6-4d91-bf9f-b4d8e0e1e065-registration-dir\") pod \"csi-node-driver-8xjpw\" (UID: \"31a49e50-08a6-4d91-bf9f-b4d8e0e1e065\") " pod="calico-system/csi-node-driver-8xjpw" Dec 13 01:27:37.601790 kubelet[3412]: I1213 01:27:37.599794 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efb7d14c-0aa8-4a72-a426-be1bddd0d465-lib-modules\") pod \"calico-node-h84g5\" (UID: \"efb7d14c-0aa8-4a72-a426-be1bddd0d465\") " pod="calico-system/calico-node-h84g5" Dec 13 01:27:37.602013 kubelet[3412]: I1213 01:27:37.599818 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/efb7d14c-0aa8-4a72-a426-be1bddd0d465-node-certs\") pod \"calico-node-h84g5\" (UID: \"efb7d14c-0aa8-4a72-a426-be1bddd0d465\") " pod="calico-system/calico-node-h84g5" Dec 13 01:27:37.604077 kubelet[3412]: I1213 
01:27:37.602335 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31a49e50-08a6-4d91-bf9f-b4d8e0e1e065-kubelet-dir\") pod \"csi-node-driver-8xjpw\" (UID: \"31a49e50-08a6-4d91-bf9f-b4d8e0e1e065\") " pod="calico-system/csi-node-driver-8xjpw" Dec 13 01:27:37.655960 containerd[1811]: time="2024-12-13T01:27:37.655916835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65ff5b8985-94n8f,Uid:dee842a0-d6db-4dfd-acd1-87c9a0350ab0,Namespace:calico-system,Attempt:0,}" Dec 13 01:27:37.701281 containerd[1811]: time="2024-12-13T01:27:37.700346170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:37.701281 containerd[1811]: time="2024-12-13T01:27:37.700765051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:37.701281 containerd[1811]: time="2024-12-13T01:27:37.700778411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:37.701281 containerd[1811]: time="2024-12-13T01:27:37.700857451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:37.727629 kubelet[3412]: E1213 01:27:37.727581 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:37.727629 kubelet[3412]: W1213 01:27:37.727615 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:37.727757 kubelet[3412]: E1213 01:27:37.727639 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:37.739343 kubelet[3412]: E1213 01:27:37.739267 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:37.739526 kubelet[3412]: W1213 01:27:37.739506 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:37.739628 kubelet[3412]: E1213 01:27:37.739616 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:37.744805 kubelet[3412]: E1213 01:27:37.744780 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:37.744805 kubelet[3412]: W1213 01:27:37.744799 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:37.744949 kubelet[3412]: E1213 01:27:37.744820 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:37.776425 containerd[1811]: time="2024-12-13T01:27:37.776205412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65ff5b8985-94n8f,Uid:dee842a0-d6db-4dfd-acd1-87c9a0350ab0,Namespace:calico-system,Attempt:0,} returns sandbox id \"dd028b68641d35e2db09fbed54150fdbda17c38a1f52469868983d939fe11599\"" Dec 13 01:27:37.778334 containerd[1811]: time="2024-12-13T01:27:37.778302857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 01:27:38.035212 containerd[1811]: time="2024-12-13T01:27:38.035155927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h84g5,Uid:efb7d14c-0aa8-4a72-a426-be1bddd0d465,Namespace:calico-system,Attempt:0,}" Dec 13 01:27:38.096199 containerd[1811]: time="2024-12-13T01:27:38.091193447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:27:38.096199 containerd[1811]: time="2024-12-13T01:27:38.091932768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:27:38.096199 containerd[1811]: time="2024-12-13T01:27:38.091947488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:38.096199 containerd[1811]: time="2024-12-13T01:27:38.092122009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:27:38.123411 containerd[1811]: time="2024-12-13T01:27:38.123365476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h84g5,Uid:efb7d14c-0aa8-4a72-a426-be1bddd0d465,Namespace:calico-system,Attempt:0,} returns sandbox id \"3e3610026d33fe946ad03e72b17361662039886f887d8a6b39bb917a68cef736\"" Dec 13 01:27:39.057387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount45154326.mount: Deactivated successfully. Dec 13 01:27:39.581202 kubelet[3412]: E1213 01:27:39.580779 3412 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8xjpw" podUID="31a49e50-08a6-4d91-bf9f-b4d8e0e1e065" Dec 13 01:27:39.781779 containerd[1811]: time="2024-12-13T01:27:39.781628386Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:39.784302 containerd[1811]: time="2024-12-13T01:27:39.784235152Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Dec 13 01:27:39.787141 containerd[1811]: time="2024-12-13T01:27:39.787082718Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:39.793247 containerd[1811]: time="2024-12-13T01:27:39.793186051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:39.793958 containerd[1811]: time="2024-12-13T01:27:39.793927372Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.015451795s" Dec 13 01:27:39.794140 containerd[1811]: time="2024-12-13T01:27:39.794049893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Dec 13 01:27:39.795385 containerd[1811]: time="2024-12-13T01:27:39.794927415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 01:27:39.807268 containerd[1811]: time="2024-12-13T01:27:39.807233961Z" level=info msg="CreateContainer within sandbox \"dd028b68641d35e2db09fbed54150fdbda17c38a1f52469868983d939fe11599\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 01:27:39.837794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3627012954.mount: Deactivated successfully. 
Dec 13 01:27:39.846384 containerd[1811]: time="2024-12-13T01:27:39.846345445Z" level=info msg="CreateContainer within sandbox \"dd028b68641d35e2db09fbed54150fdbda17c38a1f52469868983d939fe11599\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"895cdc3cbffcae39d46d1cab2bc151cbed3b7afc00cb74eb4406eedf9e0b7c0f\"" Dec 13 01:27:39.848190 containerd[1811]: time="2024-12-13T01:27:39.847204726Z" level=info msg="StartContainer for \"895cdc3cbffcae39d46d1cab2bc151cbed3b7afc00cb74eb4406eedf9e0b7c0f\"" Dec 13 01:27:39.907490 containerd[1811]: time="2024-12-13T01:27:39.907439175Z" level=info msg="StartContainer for \"895cdc3cbffcae39d46d1cab2bc151cbed3b7afc00cb74eb4406eedf9e0b7c0f\" returns successfully" Dec 13 01:27:40.715855 kubelet[3412]: E1213 01:27:40.715734 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.715855 kubelet[3412]: W1213 01:27:40.715760 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.715855 kubelet[3412]: E1213 01:27:40.715780 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:40.716461 kubelet[3412]: E1213 01:27:40.716349 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.716461 kubelet[3412]: W1213 01:27:40.716363 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.716461 kubelet[3412]: E1213 01:27:40.716380 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.716862 kubelet[3412]: E1213 01:27:40.716542 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.716862 kubelet[3412]: W1213 01:27:40.716551 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.716862 kubelet[3412]: E1213 01:27:40.716563 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:40.716862 kubelet[3412]: E1213 01:27:40.716708 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.716862 kubelet[3412]: W1213 01:27:40.716716 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.716862 kubelet[3412]: E1213 01:27:40.716727 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.717031 kubelet[3412]: E1213 01:27:40.717020 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.717085 kubelet[3412]: W1213 01:27:40.717076 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.717215 kubelet[3412]: E1213 01:27:40.717129 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:40.717314 kubelet[3412]: E1213 01:27:40.717304 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.717367 kubelet[3412]: W1213 01:27:40.717357 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.717422 kubelet[3412]: E1213 01:27:40.717414 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.717614 kubelet[3412]: E1213 01:27:40.717604 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.717750 kubelet[3412]: W1213 01:27:40.717666 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.717750 kubelet[3412]: E1213 01:27:40.717681 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:40.717885 kubelet[3412]: E1213 01:27:40.717875 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.717940 kubelet[3412]: W1213 01:27:40.717930 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.717997 kubelet[3412]: E1213 01:27:40.717989 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.718292 kubelet[3412]: E1213 01:27:40.718210 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.718292 kubelet[3412]: W1213 01:27:40.718220 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.718292 kubelet[3412]: E1213 01:27:40.718231 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:40.718442 kubelet[3412]: E1213 01:27:40.718432 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.718492 kubelet[3412]: W1213 01:27:40.718483 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.718616 kubelet[3412]: E1213 01:27:40.718538 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.718791 kubelet[3412]: E1213 01:27:40.718701 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.718791 kubelet[3412]: W1213 01:27:40.718711 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.718791 kubelet[3412]: E1213 01:27:40.718723 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:40.718927 kubelet[3412]: E1213 01:27:40.718917 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.718978 kubelet[3412]: W1213 01:27:40.718969 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.719035 kubelet[3412]: E1213 01:27:40.719027 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.719259 kubelet[3412]: E1213 01:27:40.719247 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.719412 kubelet[3412]: W1213 01:27:40.719320 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.719412 kubelet[3412]: E1213 01:27:40.719338 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:40.719526 kubelet[3412]: E1213 01:27:40.719516 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.719579 kubelet[3412]: W1213 01:27:40.719569 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.719640 kubelet[3412]: E1213 01:27:40.719632 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.719886 kubelet[3412]: E1213 01:27:40.719820 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.719886 kubelet[3412]: W1213 01:27:40.719829 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.719886 kubelet[3412]: E1213 01:27:40.719842 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:40.724197 kubelet[3412]: E1213 01:27:40.724179 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.724197 kubelet[3412]: W1213 01:27:40.724195 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.724288 kubelet[3412]: E1213 01:27:40.724209 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.724448 kubelet[3412]: E1213 01:27:40.724429 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.724448 kubelet[3412]: W1213 01:27:40.724444 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.724506 kubelet[3412]: E1213 01:27:40.724462 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:40.724628 kubelet[3412]: E1213 01:27:40.724611 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.724628 kubelet[3412]: W1213 01:27:40.724625 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.724683 kubelet[3412]: E1213 01:27:40.724638 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.724857 kubelet[3412]: E1213 01:27:40.724838 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.724857 kubelet[3412]: W1213 01:27:40.724855 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.724921 kubelet[3412]: E1213 01:27:40.724871 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:40.725028 kubelet[3412]: E1213 01:27:40.725010 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.725028 kubelet[3412]: W1213 01:27:40.725022 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.725100 kubelet[3412]: E1213 01:27:40.725041 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.725240 kubelet[3412]: E1213 01:27:40.725225 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.725240 kubelet[3412]: W1213 01:27:40.725238 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.725331 kubelet[3412]: E1213 01:27:40.725255 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:40.725474 kubelet[3412]: E1213 01:27:40.725456 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.725474 kubelet[3412]: W1213 01:27:40.725470 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.725538 kubelet[3412]: E1213 01:27:40.725487 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.725766 kubelet[3412]: E1213 01:27:40.725753 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.725915 kubelet[3412]: W1213 01:27:40.725812 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.725915 kubelet[3412]: E1213 01:27:40.725837 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:40.726116 kubelet[3412]: E1213 01:27:40.726035 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.726116 kubelet[3412]: W1213 01:27:40.726048 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.726116 kubelet[3412]: E1213 01:27:40.726087 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.726295 kubelet[3412]: E1213 01:27:40.726282 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.726344 kubelet[3412]: W1213 01:27:40.726334 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.726439 kubelet[3412]: E1213 01:27:40.726415 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:40.726746 kubelet[3412]: E1213 01:27:40.726627 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.726746 kubelet[3412]: W1213 01:27:40.726640 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.726746 kubelet[3412]: E1213 01:27:40.726661 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.726904 kubelet[3412]: E1213 01:27:40.726892 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.726957 kubelet[3412]: W1213 01:27:40.726947 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.727015 kubelet[3412]: E1213 01:27:40.727007 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:40.727280 kubelet[3412]: E1213 01:27:40.727259 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.727280 kubelet[3412]: W1213 01:27:40.727280 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.727364 kubelet[3412]: E1213 01:27:40.727299 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.727469 kubelet[3412]: E1213 01:27:40.727442 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.727469 kubelet[3412]: W1213 01:27:40.727457 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.727522 kubelet[3412]: E1213 01:27:40.727478 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:40.727666 kubelet[3412]: E1213 01:27:40.727651 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.727666 kubelet[3412]: W1213 01:27:40.727664 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.727720 kubelet[3412]: E1213 01:27:40.727678 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.727986 kubelet[3412]: E1213 01:27:40.727969 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.727986 kubelet[3412]: W1213 01:27:40.727984 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.728054 kubelet[3412]: E1213 01:27:40.728001 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:40.728203 kubelet[3412]: E1213 01:27:40.728175 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.728203 kubelet[3412]: W1213 01:27:40.728201 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.728272 kubelet[3412]: E1213 01:27:40.728213 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:27:40.728626 kubelet[3412]: E1213 01:27:40.728610 3412 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:27:40.728626 kubelet[3412]: W1213 01:27:40.728624 3412 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:27:40.728687 kubelet[3412]: E1213 01:27:40.728637 3412 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:27:41.120540 containerd[1811]: time="2024-12-13T01:27:41.120497693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:41.123281 containerd[1811]: time="2024-12-13T01:27:41.123252979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Dec 13 01:27:41.126136 containerd[1811]: time="2024-12-13T01:27:41.126109265Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:41.131214 containerd[1811]: time="2024-12-13T01:27:41.131143875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:41.131937 containerd[1811]: time="2024-12-13T01:27:41.131779517Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.336817142s" Dec 13 01:27:41.131937 containerd[1811]: time="2024-12-13T01:27:41.131816797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Dec 13 01:27:41.134727 containerd[1811]: time="2024-12-13T01:27:41.134619003Z" level=info msg="CreateContainer within sandbox \"3e3610026d33fe946ad03e72b17361662039886f887d8a6b39bb917a68cef736\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:27:41.173705 containerd[1811]: time="2024-12-13T01:27:41.173642726Z" level=info msg="CreateContainer within sandbox \"3e3610026d33fe946ad03e72b17361662039886f887d8a6b39bb917a68cef736\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fcc27a45dd5b140837541a8ba34757baebdca9aefe5c6be4bd90905c0bd67a06\"" Dec 13 01:27:41.175380 containerd[1811]: time="2024-12-13T01:27:41.175341970Z" level=info msg="StartContainer for \"fcc27a45dd5b140837541a8ba34757baebdca9aefe5c6be4bd90905c0bd67a06\"" Dec 13 01:27:41.225530 systemd[1]: run-containerd-runc-k8s.io-fcc27a45dd5b140837541a8ba34757baebdca9aefe5c6be4bd90905c0bd67a06-runc.G3KEFl.mount: Deactivated successfully. Dec 13 01:27:41.275050 containerd[1811]: time="2024-12-13T01:27:41.273841701Z" level=info msg="StartContainer for \"fcc27a45dd5b140837541a8ba34757baebdca9aefe5c6be4bd90905c0bd67a06\" returns successfully" Dec 13 01:27:41.580883 kubelet[3412]: E1213 01:27:41.580842 3412 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8xjpw" podUID="31a49e50-08a6-4d91-bf9f-b4d8e0e1e065" Dec 13 01:27:42.214785 kubelet[3412]: I1213 01:27:41.678013 3412 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:27:42.214785 kubelet[3412]: I1213 01:27:41.696479 3412 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-65ff5b8985-94n8f" podStartSLOduration=2.679544768 podStartE2EDuration="4.696271125s" podCreationTimestamp="2024-12-13 01:27:37 +0000 UTC" firstStartedPulling="2024-12-13 01:27:37.777702616 +0000 UTC m=+21.347239824" lastFinishedPulling="2024-12-13 01:27:39.794429013 +0000 UTC m=+23.363966181" observedRunningTime="2024-12-13 01:27:40.695612783 +0000 UTC 
m=+24.265149991" watchObservedRunningTime="2024-12-13 01:27:41.696271125 +0000 UTC m=+25.265808293" Dec 13 01:27:41.800669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fcc27a45dd5b140837541a8ba34757baebdca9aefe5c6be4bd90905c0bd67a06-rootfs.mount: Deactivated successfully. Dec 13 01:27:42.237059 containerd[1811]: time="2024-12-13T01:27:42.236843883Z" level=info msg="shim disconnected" id=fcc27a45dd5b140837541a8ba34757baebdca9aefe5c6be4bd90905c0bd67a06 namespace=k8s.io Dec 13 01:27:42.237059 containerd[1811]: time="2024-12-13T01:27:42.236896923Z" level=warning msg="cleaning up after shim disconnected" id=fcc27a45dd5b140837541a8ba34757baebdca9aefe5c6be4bd90905c0bd67a06 namespace=k8s.io Dec 13 01:27:42.237059 containerd[1811]: time="2024-12-13T01:27:42.236905083Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:27:42.682471 containerd[1811]: time="2024-12-13T01:27:42.682428477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:27:43.580430 kubelet[3412]: E1213 01:27:43.580362 3412 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8xjpw" podUID="31a49e50-08a6-4d91-bf9f-b4d8e0e1e065" Dec 13 01:27:45.580816 kubelet[3412]: E1213 01:27:45.580776 3412 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8xjpw" podUID="31a49e50-08a6-4d91-bf9f-b4d8e0e1e065" Dec 13 01:27:46.788440 containerd[1811]: time="2024-12-13T01:27:46.788363748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:46.790307 
containerd[1811]: time="2024-12-13T01:27:46.790271830Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Dec 13 01:27:46.793716 containerd[1811]: time="2024-12-13T01:27:46.793661195Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:46.797105 containerd[1811]: time="2024-12-13T01:27:46.797073720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:46.797954 containerd[1811]: time="2024-12-13T01:27:46.797837882Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 4.115198005s" Dec 13 01:27:46.797954 containerd[1811]: time="2024-12-13T01:27:46.797870082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Dec 13 01:27:46.801057 containerd[1811]: time="2024-12-13T01:27:46.801018806Z" level=info msg="CreateContainer within sandbox \"3e3610026d33fe946ad03e72b17361662039886f887d8a6b39bb917a68cef736\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:27:46.835625 containerd[1811]: time="2024-12-13T01:27:46.835570377Z" level=info msg="CreateContainer within sandbox \"3e3610026d33fe946ad03e72b17361662039886f887d8a6b39bb917a68cef736\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"158d4314c5d73cf6b243afd61b1e1c7e715ab9a6a4a2a94927953f589974821f\"" Dec 13 
01:27:46.836583 containerd[1811]: time="2024-12-13T01:27:46.836343379Z" level=info msg="StartContainer for \"158d4314c5d73cf6b243afd61b1e1c7e715ab9a6a4a2a94927953f589974821f\"" Dec 13 01:27:46.891632 containerd[1811]: time="2024-12-13T01:27:46.891497060Z" level=info msg="StartContainer for \"158d4314c5d73cf6b243afd61b1e1c7e715ab9a6a4a2a94927953f589974821f\" returns successfully" Dec 13 01:27:47.581147 kubelet[3412]: E1213 01:27:47.580812 3412 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8xjpw" podUID="31a49e50-08a6-4d91-bf9f-b4d8e0e1e065" Dec 13 01:27:47.986517 containerd[1811]: time="2024-12-13T01:27:47.986396719Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:27:47.999640 kubelet[3412]: I1213 01:27:47.999608 3412 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:27:48.015135 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-158d4314c5d73cf6b243afd61b1e1c7e715ab9a6a4a2a94927953f589974821f-rootfs.mount: Deactivated successfully. 
Dec 13 01:27:48.049519 kubelet[3412]: I1213 01:27:48.049480 3412 topology_manager.go:215] "Topology Admit Handler" podUID="a1d94ea8-bc82-4759-bbe6-96e9a3a3933f" podNamespace="kube-system" podName="coredns-76f75df574-7hccp" Dec 13 01:27:48.057521 kubelet[3412]: I1213 01:27:48.057458 3412 topology_manager.go:215] "Topology Admit Handler" podUID="615b4d62-a251-4f67-a6ae-4331125f9266" podNamespace="kube-system" podName="coredns-76f75df574-h7l4p" Dec 13 01:27:48.066607 kubelet[3412]: I1213 01:27:48.064461 3412 topology_manager.go:215] "Topology Admit Handler" podUID="630b6a63-ec70-4f62-be70-0062014125b5" podNamespace="calico-apiserver" podName="calico-apiserver-8659b5f7c7-t7g7q" Dec 13 01:27:48.066607 kubelet[3412]: I1213 01:27:48.066212 3412 topology_manager.go:215] "Topology Admit Handler" podUID="c28ab2d8-49cf-45a5-b55c-a48ac6236be7" podNamespace="calico-apiserver" podName="calico-apiserver-8659b5f7c7-mrdjt" Dec 13 01:27:48.069189 kubelet[3412]: I1213 01:27:48.067955 3412 topology_manager.go:215] "Topology Admit Handler" podUID="56a8139e-d217-4073-b397-5d26c40f4540" podNamespace="calico-system" podName="calico-kube-controllers-64b868bd8-nmv69" Dec 13 01:27:48.076691 kubelet[3412]: I1213 01:27:48.076664 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trq25\" (UniqueName: \"kubernetes.io/projected/615b4d62-a251-4f67-a6ae-4331125f9266-kube-api-access-trq25\") pod \"coredns-76f75df574-h7l4p\" (UID: \"615b4d62-a251-4f67-a6ae-4331125f9266\") " pod="kube-system/coredns-76f75df574-h7l4p" Dec 13 01:27:48.083758 kubelet[3412]: I1213 01:27:48.081699 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl24f\" (UniqueName: \"kubernetes.io/projected/630b6a63-ec70-4f62-be70-0062014125b5-kube-api-access-jl24f\") pod \"calico-apiserver-8659b5f7c7-t7g7q\" (UID: \"630b6a63-ec70-4f62-be70-0062014125b5\") " 
pod="calico-apiserver/calico-apiserver-8659b5f7c7-t7g7q" Dec 13 01:27:48.083758 kubelet[3412]: I1213 01:27:48.081844 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b2hd\" (UniqueName: \"kubernetes.io/projected/c28ab2d8-49cf-45a5-b55c-a48ac6236be7-kube-api-access-2b2hd\") pod \"calico-apiserver-8659b5f7c7-mrdjt\" (UID: \"c28ab2d8-49cf-45a5-b55c-a48ac6236be7\") " pod="calico-apiserver/calico-apiserver-8659b5f7c7-mrdjt" Dec 13 01:27:48.083758 kubelet[3412]: I1213 01:27:48.082176 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/615b4d62-a251-4f67-a6ae-4331125f9266-config-volume\") pod \"coredns-76f75df574-h7l4p\" (UID: \"615b4d62-a251-4f67-a6ae-4331125f9266\") " pod="kube-system/coredns-76f75df574-h7l4p" Dec 13 01:27:48.083758 kubelet[3412]: I1213 01:27:48.082219 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx7fh\" (UniqueName: \"kubernetes.io/projected/a1d94ea8-bc82-4759-bbe6-96e9a3a3933f-kube-api-access-lx7fh\") pod \"coredns-76f75df574-7hccp\" (UID: \"a1d94ea8-bc82-4759-bbe6-96e9a3a3933f\") " pod="kube-system/coredns-76f75df574-7hccp" Dec 13 01:27:48.083758 kubelet[3412]: I1213 01:27:48.082329 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56a8139e-d217-4073-b397-5d26c40f4540-tigera-ca-bundle\") pod \"calico-kube-controllers-64b868bd8-nmv69\" (UID: \"56a8139e-d217-4073-b397-5d26c40f4540\") " pod="calico-system/calico-kube-controllers-64b868bd8-nmv69" Dec 13 01:27:48.083974 kubelet[3412]: I1213 01:27:48.082357 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt4mc\" (UniqueName: 
\"kubernetes.io/projected/56a8139e-d217-4073-b397-5d26c40f4540-kube-api-access-vt4mc\") pod \"calico-kube-controllers-64b868bd8-nmv69\" (UID: \"56a8139e-d217-4073-b397-5d26c40f4540\") " pod="calico-system/calico-kube-controllers-64b868bd8-nmv69" Dec 13 01:27:48.083974 kubelet[3412]: I1213 01:27:48.082386 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/630b6a63-ec70-4f62-be70-0062014125b5-calico-apiserver-certs\") pod \"calico-apiserver-8659b5f7c7-t7g7q\" (UID: \"630b6a63-ec70-4f62-be70-0062014125b5\") " pod="calico-apiserver/calico-apiserver-8659b5f7c7-t7g7q" Dec 13 01:27:48.083974 kubelet[3412]: I1213 01:27:48.083819 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1d94ea8-bc82-4759-bbe6-96e9a3a3933f-config-volume\") pod \"coredns-76f75df574-7hccp\" (UID: \"a1d94ea8-bc82-4759-bbe6-96e9a3a3933f\") " pod="kube-system/coredns-76f75df574-7hccp" Dec 13 01:27:48.084357 kubelet[3412]: I1213 01:27:48.083851 3412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c28ab2d8-49cf-45a5-b55c-a48ac6236be7-calico-apiserver-certs\") pod \"calico-apiserver-8659b5f7c7-mrdjt\" (UID: \"c28ab2d8-49cf-45a5-b55c-a48ac6236be7\") " pod="calico-apiserver/calico-apiserver-8659b5f7c7-mrdjt" Dec 13 01:27:48.353695 containerd[1811]: time="2024-12-13T01:27:48.353653861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7hccp,Uid:a1d94ea8-bc82-4759-bbe6-96e9a3a3933f,Namespace:kube-system,Attempt:0,}" Dec 13 01:27:48.374758 containerd[1811]: time="2024-12-13T01:27:48.374497292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-h7l4p,Uid:615b4d62-a251-4f67-a6ae-4331125f9266,Namespace:kube-system,Attempt:0,}" Dec 13 
01:27:48.376660 containerd[1811]: time="2024-12-13T01:27:48.376625415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8659b5f7c7-t7g7q,Uid:630b6a63-ec70-4f62-be70-0062014125b5,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:27:48.378556 containerd[1811]: time="2024-12-13T01:27:48.378524138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8659b5f7c7-mrdjt,Uid:c28ab2d8-49cf-45a5-b55c-a48ac6236be7,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:27:48.385493 containerd[1811]: time="2024-12-13T01:27:48.385434028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64b868bd8-nmv69,Uid:56a8139e-d217-4073-b397-5d26c40f4540,Namespace:calico-system,Attempt:0,}" Dec 13 01:27:49.217040 containerd[1811]: time="2024-12-13T01:27:49.216977698Z" level=info msg="shim disconnected" id=158d4314c5d73cf6b243afd61b1e1c7e715ab9a6a4a2a94927953f589974821f namespace=k8s.io Dec 13 01:27:49.217040 containerd[1811]: time="2024-12-13T01:27:49.217036338Z" level=warning msg="cleaning up after shim disconnected" id=158d4314c5d73cf6b243afd61b1e1c7e715ab9a6a4a2a94927953f589974821f namespace=k8s.io Dec 13 01:27:49.217040 containerd[1811]: time="2024-12-13T01:27:49.217044498Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:27:49.427521 containerd[1811]: time="2024-12-13T01:27:49.427080448Z" level=error msg="Failed to destroy network for sandbox \"165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.428215 containerd[1811]: time="2024-12-13T01:27:49.428067330Z" level=error msg="encountered an error cleaning up failed sandbox \"165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.428474 containerd[1811]: time="2024-12-13T01:27:49.428388490Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-h7l4p,Uid:615b4d62-a251-4f67-a6ae-4331125f9266,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.429056 kubelet[3412]: E1213 01:27:49.428999 3412 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.430064 kubelet[3412]: E1213 01:27:49.429072 3412 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-h7l4p" Dec 13 01:27:49.430064 kubelet[3412]: E1213 01:27:49.429097 3412 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-h7l4p" Dec 13 01:27:49.430064 kubelet[3412]: E1213 01:27:49.429177 3412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-h7l4p_kube-system(615b4d62-a251-4f67-a6ae-4331125f9266)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-h7l4p_kube-system(615b4d62-a251-4f67-a6ae-4331125f9266)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-h7l4p" podUID="615b4d62-a251-4f67-a6ae-4331125f9266" Dec 13 01:27:49.459420 containerd[1811]: time="2024-12-13T01:27:49.459265256Z" level=error msg="Failed to destroy network for sandbox \"c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.459849 containerd[1811]: time="2024-12-13T01:27:49.459809697Z" level=error msg="encountered an error cleaning up failed sandbox \"c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.459973 containerd[1811]: time="2024-12-13T01:27:49.459951657Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-8659b5f7c7-mrdjt,Uid:c28ab2d8-49cf-45a5-b55c-a48ac6236be7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.460321 kubelet[3412]: E1213 01:27:49.460274 3412 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.460504 kubelet[3412]: E1213 01:27:49.460329 3412 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8659b5f7c7-mrdjt" Dec 13 01:27:49.460504 kubelet[3412]: E1213 01:27:49.460348 3412 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8659b5f7c7-mrdjt" Dec 13 01:27:49.460504 kubelet[3412]: E1213 01:27:49.460401 3412 pod_workers.go:1298] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8659b5f7c7-mrdjt_calico-apiserver(c28ab2d8-49cf-45a5-b55c-a48ac6236be7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8659b5f7c7-mrdjt_calico-apiserver(c28ab2d8-49cf-45a5-b55c-a48ac6236be7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8659b5f7c7-mrdjt" podUID="c28ab2d8-49cf-45a5-b55c-a48ac6236be7" Dec 13 01:27:49.475854 containerd[1811]: time="2024-12-13T01:27:49.475715880Z" level=error msg="Failed to destroy network for sandbox \"d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.477236 containerd[1811]: time="2024-12-13T01:27:49.477009682Z" level=error msg="encountered an error cleaning up failed sandbox \"d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.477236 containerd[1811]: time="2024-12-13T01:27:49.477082482Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7hccp,Uid:a1d94ea8-bc82-4759-bbe6-96e9a3a3933f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.477373 kubelet[3412]: E1213 01:27:49.477316 3412 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.477373 kubelet[3412]: E1213 01:27:49.477370 3412 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-7hccp" Dec 13 01:27:49.477558 kubelet[3412]: E1213 01:27:49.477389 3412 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-7hccp" Dec 13 01:27:49.477558 kubelet[3412]: E1213 01:27:49.477436 3412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-7hccp_kube-system(a1d94ea8-bc82-4759-bbe6-96e9a3a3933f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-7hccp_kube-system(a1d94ea8-bc82-4759-bbe6-96e9a3a3933f)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-7hccp" podUID="a1d94ea8-bc82-4759-bbe6-96e9a3a3933f" Dec 13 01:27:49.483818 containerd[1811]: time="2024-12-13T01:27:49.483771172Z" level=error msg="Failed to destroy network for sandbox \"ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.484735 containerd[1811]: time="2024-12-13T01:27:49.484611013Z" level=error msg="encountered an error cleaning up failed sandbox \"ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.484735 containerd[1811]: time="2024-12-13T01:27:49.484682333Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8659b5f7c7-t7g7q,Uid:630b6a63-ec70-4f62-be70-0062014125b5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.485128 kubelet[3412]: E1213 01:27:49.485098 3412 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.485242 kubelet[3412]: E1213 01:27:49.485152 3412 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8659b5f7c7-t7g7q" Dec 13 01:27:49.485242 kubelet[3412]: E1213 01:27:49.485196 3412 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8659b5f7c7-t7g7q" Dec 13 01:27:49.485308 kubelet[3412]: E1213 01:27:49.485245 3412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8659b5f7c7-t7g7q_calico-apiserver(630b6a63-ec70-4f62-be70-0062014125b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8659b5f7c7-t7g7q_calico-apiserver(630b6a63-ec70-4f62-be70-0062014125b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-8659b5f7c7-t7g7q" podUID="630b6a63-ec70-4f62-be70-0062014125b5" Dec 13 01:27:49.491813 containerd[1811]: time="2024-12-13T01:27:49.491710224Z" level=error msg="Failed to destroy network for sandbox \"fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.492071 containerd[1811]: time="2024-12-13T01:27:49.492034264Z" level=error msg="encountered an error cleaning up failed sandbox \"fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.492109 containerd[1811]: time="2024-12-13T01:27:49.492090464Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64b868bd8-nmv69,Uid:56a8139e-d217-4073-b397-5d26c40f4540,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.492367 kubelet[3412]: E1213 01:27:49.492335 3412 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.492423 kubelet[3412]: E1213 
01:27:49.492387 3412 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64b868bd8-nmv69" Dec 13 01:27:49.492423 kubelet[3412]: E1213 01:27:49.492407 3412 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64b868bd8-nmv69" Dec 13 01:27:49.492474 kubelet[3412]: E1213 01:27:49.492465 3412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64b868bd8-nmv69_calico-system(56a8139e-d217-4073-b397-5d26c40f4540)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64b868bd8-nmv69_calico-system(56a8139e-d217-4073-b397-5d26c40f4540)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64b868bd8-nmv69" podUID="56a8139e-d217-4073-b397-5d26c40f4540" Dec 13 01:27:49.583572 containerd[1811]: time="2024-12-13T01:27:49.583383879Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-8xjpw,Uid:31a49e50-08a6-4d91-bf9f-b4d8e0e1e065,Namespace:calico-system,Attempt:0,}" Dec 13 01:27:49.654895 containerd[1811]: time="2024-12-13T01:27:49.654826945Z" level=error msg="Failed to destroy network for sandbox \"b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.655229 containerd[1811]: time="2024-12-13T01:27:49.655176545Z" level=error msg="encountered an error cleaning up failed sandbox \"b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.655287 containerd[1811]: time="2024-12-13T01:27:49.655264145Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8xjpw,Uid:31a49e50-08a6-4d91-bf9f-b4d8e0e1e065,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.655591 kubelet[3412]: E1213 01:27:49.655496 3412 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.655591 kubelet[3412]: E1213 
01:27:49.655547 3412 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8xjpw" Dec 13 01:27:49.655591 kubelet[3412]: E1213 01:27:49.655567 3412 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8xjpw" Dec 13 01:27:49.655774 kubelet[3412]: E1213 01:27:49.655747 3412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8xjpw_calico-system(31a49e50-08a6-4d91-bf9f-b4d8e0e1e065)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8xjpw_calico-system(31a49e50-08a6-4d91-bf9f-b4d8e0e1e065)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8xjpw" podUID="31a49e50-08a6-4d91-bf9f-b4d8e0e1e065" Dec 13 01:27:49.697207 kubelet[3412]: I1213 01:27:49.697152 3412 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" Dec 13 01:27:49.699091 containerd[1811]: 
time="2024-12-13T01:27:49.698855370Z" level=info msg="StopPodSandbox for \"c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e\"" Dec 13 01:27:49.699914 kubelet[3412]: I1213 01:27:49.699887 3412 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" Dec 13 01:27:49.700572 containerd[1811]: time="2024-12-13T01:27:49.699844291Z" level=info msg="Ensure that sandbox c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e in task-service has been cleanup successfully" Dec 13 01:27:49.700572 containerd[1811]: time="2024-12-13T01:27:49.700362972Z" level=info msg="StopPodSandbox for \"165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372\"" Dec 13 01:27:49.702048 containerd[1811]: time="2024-12-13T01:27:49.700884693Z" level=info msg="Ensure that sandbox 165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372 in task-service has been cleanup successfully" Dec 13 01:27:49.703787 kubelet[3412]: I1213 01:27:49.703764 3412 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" Dec 13 01:27:49.705906 containerd[1811]: time="2024-12-13T01:27:49.705865060Z" level=info msg="StopPodSandbox for \"b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806\"" Dec 13 01:27:49.706067 containerd[1811]: time="2024-12-13T01:27:49.706039941Z" level=info msg="Ensure that sandbox b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806 in task-service has been cleanup successfully" Dec 13 01:27:49.714700 containerd[1811]: time="2024-12-13T01:27:49.714496593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:27:49.724627 kubelet[3412]: I1213 01:27:49.724597 3412 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" Dec 13 
01:27:49.726690 containerd[1811]: time="2024-12-13T01:27:49.725570609Z" level=info msg="StopPodSandbox for \"d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2\"" Dec 13 01:27:49.726690 containerd[1811]: time="2024-12-13T01:27:49.726005210Z" level=info msg="Ensure that sandbox d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2 in task-service has been cleanup successfully" Dec 13 01:27:49.735116 kubelet[3412]: I1213 01:27:49.733544 3412 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" Dec 13 01:27:49.736293 containerd[1811]: time="2024-12-13T01:27:49.736253025Z" level=info msg="StopPodSandbox for \"ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede\"" Dec 13 01:27:49.736446 containerd[1811]: time="2024-12-13T01:27:49.736419665Z" level=info msg="Ensure that sandbox ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede in task-service has been cleanup successfully" Dec 13 01:27:49.749376 kubelet[3412]: I1213 01:27:49.749077 3412 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" Dec 13 01:27:49.755052 containerd[1811]: time="2024-12-13T01:27:49.754097452Z" level=info msg="StopPodSandbox for \"fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8\"" Dec 13 01:27:49.755052 containerd[1811]: time="2024-12-13T01:27:49.754306612Z" level=info msg="Ensure that sandbox fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8 in task-service has been cleanup successfully" Dec 13 01:27:49.784767 containerd[1811]: time="2024-12-13T01:27:49.784714297Z" level=error msg="StopPodSandbox for \"165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372\" failed" error="failed to destroy network for sandbox \"165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.785338 kubelet[3412]: E1213 01:27:49.785304 3412 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" Dec 13 01:27:49.785430 kubelet[3412]: E1213 01:27:49.785377 3412 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372"} Dec 13 01:27:49.785430 kubelet[3412]: E1213 01:27:49.785414 3412 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"615b4d62-a251-4f67-a6ae-4331125f9266\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:27:49.785519 kubelet[3412]: E1213 01:27:49.785446 3412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"615b4d62-a251-4f67-a6ae-4331125f9266\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-h7l4p" podUID="615b4d62-a251-4f67-a6ae-4331125f9266" Dec 13 01:27:49.795499 containerd[1811]: time="2024-12-13T01:27:49.795429993Z" level=error msg="StopPodSandbox for \"c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e\" failed" error="failed to destroy network for sandbox \"c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.798526 kubelet[3412]: E1213 01:27:49.798490 3412 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" Dec 13 01:27:49.798654 kubelet[3412]: E1213 01:27:49.798537 3412 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e"} Dec 13 01:27:49.798654 kubelet[3412]: E1213 01:27:49.798582 3412 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c28ab2d8-49cf-45a5-b55c-a48ac6236be7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:27:49.798654 kubelet[3412]: E1213 01:27:49.798613 
3412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c28ab2d8-49cf-45a5-b55c-a48ac6236be7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8659b5f7c7-mrdjt" podUID="c28ab2d8-49cf-45a5-b55c-a48ac6236be7" Dec 13 01:27:49.809849 containerd[1811]: time="2024-12-13T01:27:49.809801534Z" level=error msg="StopPodSandbox for \"b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806\" failed" error="failed to destroy network for sandbox \"b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.811851 kubelet[3412]: E1213 01:27:49.811025 3412 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" Dec 13 01:27:49.811851 kubelet[3412]: E1213 01:27:49.811078 3412 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806"} Dec 13 01:27:49.811851 kubelet[3412]: E1213 01:27:49.811112 3412 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to 
\"KillPodSandbox\" for \"31a49e50-08a6-4d91-bf9f-b4d8e0e1e065\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:27:49.811851 kubelet[3412]: E1213 01:27:49.811140 3412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"31a49e50-08a6-4d91-bf9f-b4d8e0e1e065\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8xjpw" podUID="31a49e50-08a6-4d91-bf9f-b4d8e0e1e065" Dec 13 01:27:49.816249 containerd[1811]: time="2024-12-13T01:27:49.816208863Z" level=error msg="StopPodSandbox for \"d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2\" failed" error="failed to destroy network for sandbox \"d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.817195 kubelet[3412]: E1213 01:27:49.817144 3412 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" Dec 13 01:27:49.817307 kubelet[3412]: E1213 01:27:49.817221 3412 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2"} Dec 13 01:27:49.817307 kubelet[3412]: E1213 01:27:49.817256 3412 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a1d94ea8-bc82-4759-bbe6-96e9a3a3933f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:27:49.817307 kubelet[3412]: E1213 01:27:49.817282 3412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a1d94ea8-bc82-4759-bbe6-96e9a3a3933f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-7hccp" podUID="a1d94ea8-bc82-4759-bbe6-96e9a3a3933f" Dec 13 01:27:49.817443 containerd[1811]: time="2024-12-13T01:27:49.817405145Z" level=error msg="StopPodSandbox for \"fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8\" failed" error="failed to destroy network for sandbox \"fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Dec 13 01:27:49.817593 kubelet[3412]: E1213 01:27:49.817566 3412 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" Dec 13 01:27:49.817639 kubelet[3412]: E1213 01:27:49.817592 3412 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8"} Dec 13 01:27:49.817639 kubelet[3412]: E1213 01:27:49.817623 3412 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"56a8139e-d217-4073-b397-5d26c40f4540\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:27:49.817711 kubelet[3412]: E1213 01:27:49.817648 3412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"56a8139e-d217-4073-b397-5d26c40f4540\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64b868bd8-nmv69" 
podUID="56a8139e-d217-4073-b397-5d26c40f4540" Dec 13 01:27:49.819653 containerd[1811]: time="2024-12-13T01:27:49.819609988Z" level=error msg="StopPodSandbox for \"ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede\" failed" error="failed to destroy network for sandbox \"ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:27:49.819806 kubelet[3412]: E1213 01:27:49.819784 3412 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" Dec 13 01:27:49.819851 kubelet[3412]: E1213 01:27:49.819814 3412 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede"} Dec 13 01:27:49.819888 kubelet[3412]: E1213 01:27:49.819874 3412 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"630b6a63-ec70-4f62-be70-0062014125b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:27:49.819935 kubelet[3412]: E1213 01:27:49.819908 3412 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"630b6a63-ec70-4f62-be70-0062014125b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8659b5f7c7-t7g7q" podUID="630b6a63-ec70-4f62-be70-0062014125b5" Dec 13 01:27:50.297017 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede-shm.mount: Deactivated successfully. Dec 13 01:27:50.297148 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2-shm.mount: Deactivated successfully. Dec 13 01:27:50.297327 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372-shm.mount: Deactivated successfully. Dec 13 01:27:55.769542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2981948602.mount: Deactivated successfully. 
Dec 13 01:27:56.070588 containerd[1811]: time="2024-12-13T01:27:56.070348015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:56.072416 containerd[1811]: time="2024-12-13T01:27:56.072286378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Dec 13 01:27:56.076289 containerd[1811]: time="2024-12-13T01:27:56.076228024Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:56.080257 containerd[1811]: time="2024-12-13T01:27:56.080201190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:27:56.081152 containerd[1811]: time="2024-12-13T01:27:56.080705351Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 6.366164438s" Dec 13 01:27:56.081152 containerd[1811]: time="2024-12-13T01:27:56.080740231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Dec 13 01:27:56.094460 containerd[1811]: time="2024-12-13T01:27:56.094013010Z" level=info msg="CreateContainer within sandbox \"3e3610026d33fe946ad03e72b17361662039886f887d8a6b39bb917a68cef736\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:27:56.123654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount337321230.mount: 
Deactivated successfully. Dec 13 01:27:56.135970 containerd[1811]: time="2024-12-13T01:27:56.135926152Z" level=info msg="CreateContainer within sandbox \"3e3610026d33fe946ad03e72b17361662039886f887d8a6b39bb917a68cef736\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ffb56cdebee69e0372049f5b3cba418e4d898d28e90a43bf98da93717d90fdfa\"" Dec 13 01:27:56.136750 containerd[1811]: time="2024-12-13T01:27:56.136713154Z" level=info msg="StartContainer for \"ffb56cdebee69e0372049f5b3cba418e4d898d28e90a43bf98da93717d90fdfa\"" Dec 13 01:27:56.186448 containerd[1811]: time="2024-12-13T01:27:56.186410547Z" level=info msg="StartContainer for \"ffb56cdebee69e0372049f5b3cba418e4d898d28e90a43bf98da93717d90fdfa\" returns successfully" Dec 13 01:27:56.473427 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:27:56.473751 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 13 01:27:56.790476 kubelet[3412]: I1213 01:27:56.790312 3412 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-h84g5" podStartSLOduration=1.834208292 podStartE2EDuration="19.790272084s" podCreationTimestamp="2024-12-13 01:27:37 +0000 UTC" firstStartedPulling="2024-12-13 01:27:38.124941039 +0000 UTC m=+21.694478247" lastFinishedPulling="2024-12-13 01:27:56.081004831 +0000 UTC m=+39.650542039" observedRunningTime="2024-12-13 01:27:56.789901083 +0000 UTC m=+40.359438291" watchObservedRunningTime="2024-12-13 01:27:56.790272084 +0000 UTC m=+40.359809292" Dec 13 01:27:59.527368 kubelet[3412]: I1213 01:27:59.527323 3412 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:27:59.613150 systemd[1]: run-containerd-runc-k8s.io-ffb56cdebee69e0372049f5b3cba418e4d898d28e90a43bf98da93717d90fdfa-runc.484Kat.mount: Deactivated successfully. 
Dec 13 01:28:00.008750 kubelet[3412]: I1213 01:28:00.007683 3412 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:28:00.581559 containerd[1811]: time="2024-12-13T01:28:00.581244336Z" level=info msg="StopPodSandbox for \"165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372\"" Dec 13 01:28:00.676096 containerd[1811]: 2024-12-13 01:28:00.639 [INFO][4651] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" Dec 13 01:28:00.676096 containerd[1811]: 2024-12-13 01:28:00.640 [INFO][4651] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" iface="eth0" netns="/var/run/netns/cni-669c8923-82b9-e4ec-6da1-cf3d6aef9b77" Dec 13 01:28:00.676096 containerd[1811]: 2024-12-13 01:28:00.641 [INFO][4651] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" iface="eth0" netns="/var/run/netns/cni-669c8923-82b9-e4ec-6da1-cf3d6aef9b77" Dec 13 01:28:00.676096 containerd[1811]: 2024-12-13 01:28:00.641 [INFO][4651] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" iface="eth0" netns="/var/run/netns/cni-669c8923-82b9-e4ec-6da1-cf3d6aef9b77" Dec 13 01:28:00.676096 containerd[1811]: 2024-12-13 01:28:00.641 [INFO][4651] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" Dec 13 01:28:00.676096 containerd[1811]: 2024-12-13 01:28:00.641 [INFO][4651] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" Dec 13 01:28:00.676096 containerd[1811]: 2024-12-13 01:28:00.662 [INFO][4657] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" HandleID="k8s-pod-network.165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" Workload="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--h7l4p-eth0" Dec 13 01:28:00.676096 containerd[1811]: 2024-12-13 01:28:00.662 [INFO][4657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:00.676096 containerd[1811]: 2024-12-13 01:28:00.662 [INFO][4657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:00.676096 containerd[1811]: 2024-12-13 01:28:00.671 [WARNING][4657] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" HandleID="k8s-pod-network.165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" Workload="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--h7l4p-eth0" Dec 13 01:28:00.676096 containerd[1811]: 2024-12-13 01:28:00.671 [INFO][4657] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" HandleID="k8s-pod-network.165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" Workload="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--h7l4p-eth0" Dec 13 01:28:00.676096 containerd[1811]: 2024-12-13 01:28:00.673 [INFO][4657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:00.676096 containerd[1811]: 2024-12-13 01:28:00.674 [INFO][4651] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" Dec 13 01:28:00.677521 containerd[1811]: time="2024-12-13T01:28:00.676296008Z" level=info msg="TearDown network for sandbox \"165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372\" successfully" Dec 13 01:28:00.677521 containerd[1811]: time="2024-12-13T01:28:00.676333048Z" level=info msg="StopPodSandbox for \"165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372\" returns successfully" Dec 13 01:28:00.679974 systemd[1]: run-netns-cni\x2d669c8923\x2d82b9\x2de4ec\x2d6da1\x2dcf3d6aef9b77.mount: Deactivated successfully. 
Dec 13 01:28:00.683523 containerd[1811]: time="2024-12-13T01:28:00.683481342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-h7l4p,Uid:615b4d62-a251-4f67-a6ae-4331125f9266,Namespace:kube-system,Attempt:1,}" Dec 13 01:28:01.320367 systemd-networkd[1382]: calidbf8d2ccca6: Link UP Dec 13 01:28:01.321399 systemd-networkd[1382]: calidbf8d2ccca6: Gained carrier Dec 13 01:28:01.341547 containerd[1811]: 2024-12-13 01:28:00.767 [INFO][4664] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:28:01.341547 containerd[1811]: 2024-12-13 01:28:00.783 [INFO][4664] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--h7l4p-eth0 coredns-76f75df574- kube-system 615b4d62-a251-4f67-a6ae-4331125f9266 782 0 2024-12-13 01:27:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.1-a-a2790899e3 coredns-76f75df574-h7l4p eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidbf8d2ccca6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="931826f6934cc4937ad3ec70d28a0385ccafee6b640de96becfe900844eebef8" Namespace="kube-system" Pod="coredns-76f75df574-h7l4p" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--h7l4p-" Dec 13 01:28:01.341547 containerd[1811]: 2024-12-13 01:28:00.784 [INFO][4664] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="931826f6934cc4937ad3ec70d28a0385ccafee6b640de96becfe900844eebef8" Namespace="kube-system" Pod="coredns-76f75df574-h7l4p" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--h7l4p-eth0" Dec 13 01:28:01.341547 containerd[1811]: 2024-12-13 01:28:00.812 [INFO][4676] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="931826f6934cc4937ad3ec70d28a0385ccafee6b640de96becfe900844eebef8" HandleID="k8s-pod-network.931826f6934cc4937ad3ec70d28a0385ccafee6b640de96becfe900844eebef8" Workload="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--h7l4p-eth0" Dec 13 01:28:01.341547 containerd[1811]: 2024-12-13 01:28:00.824 [INFO][4676] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="931826f6934cc4937ad3ec70d28a0385ccafee6b640de96becfe900844eebef8" HandleID="k8s-pod-network.931826f6934cc4937ad3ec70d28a0385ccafee6b640de96becfe900844eebef8" Workload="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--h7l4p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002214b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.1-a-a2790899e3", "pod":"coredns-76f75df574-h7l4p", "timestamp":"2024-12-13 01:28:00.812499082 +0000 UTC"}, Hostname:"ci-4081.2.1-a-a2790899e3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:28:01.341547 containerd[1811]: 2024-12-13 01:28:00.824 [INFO][4676] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:01.341547 containerd[1811]: 2024-12-13 01:28:00.824 [INFO][4676] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:28:01.341547 containerd[1811]: 2024-12-13 01:28:00.824 [INFO][4676] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-a2790899e3' Dec 13 01:28:01.341547 containerd[1811]: 2024-12-13 01:28:00.826 [INFO][4676] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.931826f6934cc4937ad3ec70d28a0385ccafee6b640de96becfe900844eebef8" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:01.341547 containerd[1811]: 2024-12-13 01:28:00.829 [INFO][4676] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:01.341547 containerd[1811]: 2024-12-13 01:28:00.833 [INFO][4676] ipam/ipam.go 489: Trying affinity for 192.168.79.64/26 host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:01.341547 containerd[1811]: 2024-12-13 01:28:00.835 [INFO][4676] ipam/ipam.go 155: Attempting to load block cidr=192.168.79.64/26 host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:01.341547 containerd[1811]: 2024-12-13 01:28:00.837 [INFO][4676] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.79.64/26 host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:01.341547 containerd[1811]: 2024-12-13 01:28:00.837 [INFO][4676] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.79.64/26 handle="k8s-pod-network.931826f6934cc4937ad3ec70d28a0385ccafee6b640de96becfe900844eebef8" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:01.341547 containerd[1811]: 2024-12-13 01:28:00.838 [INFO][4676] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.931826f6934cc4937ad3ec70d28a0385ccafee6b640de96becfe900844eebef8 Dec 13 01:28:01.341547 containerd[1811]: 2024-12-13 01:28:00.844 [INFO][4676] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.79.64/26 handle="k8s-pod-network.931826f6934cc4937ad3ec70d28a0385ccafee6b640de96becfe900844eebef8" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:01.341547 containerd[1811]: 2024-12-13 01:28:00.853 [INFO][4676] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.79.65/26] block=192.168.79.64/26 handle="k8s-pod-network.931826f6934cc4937ad3ec70d28a0385ccafee6b640de96becfe900844eebef8" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:01.341547 containerd[1811]: 2024-12-13 01:28:00.853 [INFO][4676] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.79.65/26] handle="k8s-pod-network.931826f6934cc4937ad3ec70d28a0385ccafee6b640de96becfe900844eebef8" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:01.341547 containerd[1811]: 2024-12-13 01:28:00.853 [INFO][4676] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:01.341547 containerd[1811]: 2024-12-13 01:28:00.853 [INFO][4676] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.65/26] IPv6=[] ContainerID="931826f6934cc4937ad3ec70d28a0385ccafee6b640de96becfe900844eebef8" HandleID="k8s-pod-network.931826f6934cc4937ad3ec70d28a0385ccafee6b640de96becfe900844eebef8" Workload="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--h7l4p-eth0" Dec 13 01:28:01.342224 containerd[1811]: 2024-12-13 01:28:00.856 [INFO][4664] cni-plugin/k8s.go 386: Populated endpoint ContainerID="931826f6934cc4937ad3ec70d28a0385ccafee6b640de96becfe900844eebef8" Namespace="kube-system" Pod="coredns-76f75df574-h7l4p" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--h7l4p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--h7l4p-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"615b4d62-a251-4f67-a6ae-4331125f9266", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-a2790899e3", ContainerID:"", Pod:"coredns-76f75df574-h7l4p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidbf8d2ccca6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:01.342224 containerd[1811]: 2024-12-13 01:28:00.856 [INFO][4664] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.79.65/32] ContainerID="931826f6934cc4937ad3ec70d28a0385ccafee6b640de96becfe900844eebef8" Namespace="kube-system" Pod="coredns-76f75df574-h7l4p" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--h7l4p-eth0" Dec 13 01:28:01.342224 containerd[1811]: 2024-12-13 01:28:00.856 [INFO][4664] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidbf8d2ccca6 ContainerID="931826f6934cc4937ad3ec70d28a0385ccafee6b640de96becfe900844eebef8" Namespace="kube-system" Pod="coredns-76f75df574-h7l4p" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--h7l4p-eth0" Dec 13 01:28:01.342224 containerd[1811]: 2024-12-13 01:28:01.321 [INFO][4664] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="931826f6934cc4937ad3ec70d28a0385ccafee6b640de96becfe900844eebef8" Namespace="kube-system" Pod="coredns-76f75df574-h7l4p" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--h7l4p-eth0" Dec 13 01:28:01.342224 containerd[1811]: 2024-12-13 01:28:01.321 [INFO][4664] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="931826f6934cc4937ad3ec70d28a0385ccafee6b640de96becfe900844eebef8" Namespace="kube-system" Pod="coredns-76f75df574-h7l4p" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--h7l4p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--h7l4p-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"615b4d62-a251-4f67-a6ae-4331125f9266", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-a2790899e3", ContainerID:"931826f6934cc4937ad3ec70d28a0385ccafee6b640de96becfe900844eebef8", Pod:"coredns-76f75df574-h7l4p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidbf8d2ccca6", MAC:"6a:f2:fd:68:56:1b", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:01.342224 containerd[1811]: 2024-12-13 01:28:01.337 [INFO][4664] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="931826f6934cc4937ad3ec70d28a0385ccafee6b640de96becfe900844eebef8" Namespace="kube-system" Pod="coredns-76f75df574-h7l4p" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--h7l4p-eth0" Dec 13 01:28:01.422339 containerd[1811]: time="2024-12-13T01:28:01.421879350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:01.422475 containerd[1811]: time="2024-12-13T01:28:01.422380991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:01.422475 containerd[1811]: time="2024-12-13T01:28:01.422444871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:01.424365 containerd[1811]: time="2024-12-13T01:28:01.424144075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:01.449330 kernel: bpftool[4770]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:28:01.490086 containerd[1811]: time="2024-12-13T01:28:01.490010607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-h7l4p,Uid:615b4d62-a251-4f67-a6ae-4331125f9266,Namespace:kube-system,Attempt:1,} returns sandbox id \"931826f6934cc4937ad3ec70d28a0385ccafee6b640de96becfe900844eebef8\"" Dec 13 01:28:01.494225 containerd[1811]: time="2024-12-13T01:28:01.493311134Z" level=info msg="CreateContainer within sandbox \"931826f6934cc4937ad3ec70d28a0385ccafee6b640de96becfe900844eebef8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:28:01.604228 containerd[1811]: time="2024-12-13T01:28:01.581079391Z" level=info msg="StopPodSandbox for \"ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede\"" Dec 13 01:28:01.604228 containerd[1811]: time="2024-12-13T01:28:01.583014195Z" level=info msg="StopPodSandbox for \"b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806\"" Dec 13 01:28:01.734559 containerd[1811]: 2024-12-13 01:28:01.679 [INFO][4829] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" Dec 13 01:28:01.734559 containerd[1811]: 2024-12-13 01:28:01.681 [INFO][4829] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" iface="eth0" netns="/var/run/netns/cni-3e7df4b4-5584-563c-bea0-7dd4cc79fb12" Dec 13 01:28:01.734559 containerd[1811]: 2024-12-13 01:28:01.682 [INFO][4829] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" iface="eth0" netns="/var/run/netns/cni-3e7df4b4-5584-563c-bea0-7dd4cc79fb12" Dec 13 01:28:01.734559 containerd[1811]: 2024-12-13 01:28:01.683 [INFO][4829] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" iface="eth0" netns="/var/run/netns/cni-3e7df4b4-5584-563c-bea0-7dd4cc79fb12" Dec 13 01:28:01.734559 containerd[1811]: 2024-12-13 01:28:01.683 [INFO][4829] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" Dec 13 01:28:01.734559 containerd[1811]: 2024-12-13 01:28:01.683 [INFO][4829] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" Dec 13 01:28:01.734559 containerd[1811]: 2024-12-13 01:28:01.713 [INFO][4847] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" HandleID="k8s-pod-network.b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" Workload="ci--4081.2.1--a--a2790899e3-k8s-csi--node--driver--8xjpw-eth0" Dec 13 01:28:01.734559 containerd[1811]: 2024-12-13 01:28:01.713 [INFO][4847] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:01.734559 containerd[1811]: 2024-12-13 01:28:01.713 [INFO][4847] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:01.734559 containerd[1811]: 2024-12-13 01:28:01.725 [WARNING][4847] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" HandleID="k8s-pod-network.b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" Workload="ci--4081.2.1--a--a2790899e3-k8s-csi--node--driver--8xjpw-eth0" Dec 13 01:28:01.734559 containerd[1811]: 2024-12-13 01:28:01.725 [INFO][4847] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" HandleID="k8s-pod-network.b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" Workload="ci--4081.2.1--a--a2790899e3-k8s-csi--node--driver--8xjpw-eth0" Dec 13 01:28:01.734559 containerd[1811]: 2024-12-13 01:28:01.727 [INFO][4847] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:01.734559 containerd[1811]: 2024-12-13 01:28:01.728 [INFO][4829] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" Dec 13 01:28:01.734559 containerd[1811]: time="2024-12-13T01:28:01.730114531Z" level=info msg="TearDown network for sandbox \"b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806\" successfully" Dec 13 01:28:01.734559 containerd[1811]: time="2024-12-13T01:28:01.730140771Z" level=info msg="StopPodSandbox for \"b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806\" returns successfully" Dec 13 01:28:01.734559 containerd[1811]: time="2024-12-13T01:28:01.730860373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8xjpw,Uid:31a49e50-08a6-4d91-bf9f-b4d8e0e1e065,Namespace:calico-system,Attempt:1,}" Dec 13 01:28:01.735536 systemd[1]: run-netns-cni\x2d3e7df4b4\x2d5584\x2d563c\x2dbea0\x2d7dd4cc79fb12.mount: Deactivated successfully. 
Dec 13 01:28:01.750667 containerd[1811]: 2024-12-13 01:28:01.698 [INFO][4828] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" Dec 13 01:28:01.750667 containerd[1811]: 2024-12-13 01:28:01.699 [INFO][4828] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" iface="eth0" netns="/var/run/netns/cni-a2fa7798-a0fc-e274-b74b-8942d75c8cdd" Dec 13 01:28:01.750667 containerd[1811]: 2024-12-13 01:28:01.699 [INFO][4828] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" iface="eth0" netns="/var/run/netns/cni-a2fa7798-a0fc-e274-b74b-8942d75c8cdd" Dec 13 01:28:01.750667 containerd[1811]: 2024-12-13 01:28:01.700 [INFO][4828] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" iface="eth0" netns="/var/run/netns/cni-a2fa7798-a0fc-e274-b74b-8942d75c8cdd" Dec 13 01:28:01.750667 containerd[1811]: 2024-12-13 01:28:01.700 [INFO][4828] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" Dec 13 01:28:01.750667 containerd[1811]: 2024-12-13 01:28:01.700 [INFO][4828] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" Dec 13 01:28:01.750667 containerd[1811]: 2024-12-13 01:28:01.736 [INFO][4851] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" HandleID="k8s-pod-network.ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--t7g7q-eth0" Dec 13 01:28:01.750667 containerd[1811]: 2024-12-13 01:28:01.737 
[INFO][4851] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:01.750667 containerd[1811]: 2024-12-13 01:28:01.737 [INFO][4851] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:01.750667 containerd[1811]: 2024-12-13 01:28:01.745 [WARNING][4851] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" HandleID="k8s-pod-network.ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--t7g7q-eth0" Dec 13 01:28:01.750667 containerd[1811]: 2024-12-13 01:28:01.745 [INFO][4851] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" HandleID="k8s-pod-network.ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--t7g7q-eth0" Dec 13 01:28:01.750667 containerd[1811]: 2024-12-13 01:28:01.747 [INFO][4851] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:01.750667 containerd[1811]: 2024-12-13 01:28:01.748 [INFO][4828] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" Dec 13 01:28:01.753512 containerd[1811]: time="2024-12-13T01:28:01.750854773Z" level=info msg="TearDown network for sandbox \"ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede\" successfully" Dec 13 01:28:01.753512 containerd[1811]: time="2024-12-13T01:28:01.750884613Z" level=info msg="StopPodSandbox for \"ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede\" returns successfully" Dec 13 01:28:01.753512 containerd[1811]: time="2024-12-13T01:28:01.753144537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8659b5f7c7-t7g7q,Uid:630b6a63-ec70-4f62-be70-0062014125b5,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:28:01.754038 systemd[1]: run-netns-cni\x2da2fa7798\x2da0fc\x2de274\x2db74b\x2d8942d75c8cdd.mount: Deactivated successfully. Dec 13 01:28:01.799814 systemd-networkd[1382]: vxlan.calico: Link UP Dec 13 01:28:01.799827 systemd-networkd[1382]: vxlan.calico: Gained carrier Dec 13 01:28:02.017568 containerd[1811]: time="2024-12-13T01:28:02.016613148Z" level=info msg="CreateContainer within sandbox \"931826f6934cc4937ad3ec70d28a0385ccafee6b640de96becfe900844eebef8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"976576cac1be8c752f25ba6d557640126e353bd56dff64af1e36620c10f3b5a5\"" Dec 13 01:28:02.019153 containerd[1811]: time="2024-12-13T01:28:02.019071793Z" level=info msg="StartContainer for \"976576cac1be8c752f25ba6d557640126e353bd56dff64af1e36620c10f3b5a5\"" Dec 13 01:28:02.124108 containerd[1811]: time="2024-12-13T01:28:02.124026205Z" level=info msg="StartContainer for \"976576cac1be8c752f25ba6d557640126e353bd56dff64af1e36620c10f3b5a5\" returns successfully" Dec 13 01:28:02.338600 systemd-networkd[1382]: cali08417ee6a5c: Link UP Dec 13 01:28:02.340641 systemd-networkd[1382]: cali08417ee6a5c: Gained carrier Dec 13 01:28:02.373264 containerd[1811]: 2024-12-13 01:28:02.240 [INFO][4943] cni-plugin/plugin.go 
325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--a2790899e3-k8s-csi--node--driver--8xjpw-eth0 csi-node-driver- calico-system 31a49e50-08a6-4d91-bf9f-b4d8e0e1e065 791 0 2024-12-13 01:27:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.2.1-a-a2790899e3 csi-node-driver-8xjpw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali08417ee6a5c [] []}} ContainerID="2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825" Namespace="calico-system" Pod="csi-node-driver-8xjpw" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-csi--node--driver--8xjpw-" Dec 13 01:28:02.373264 containerd[1811]: 2024-12-13 01:28:02.240 [INFO][4943] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825" Namespace="calico-system" Pod="csi-node-driver-8xjpw" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-csi--node--driver--8xjpw-eth0" Dec 13 01:28:02.373264 containerd[1811]: 2024-12-13 01:28:02.289 [INFO][4974] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825" HandleID="k8s-pod-network.2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825" Workload="ci--4081.2.1--a--a2790899e3-k8s-csi--node--driver--8xjpw-eth0" Dec 13 01:28:02.373264 containerd[1811]: 2024-12-13 01:28:02.306 [INFO][4974] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825" HandleID="k8s-pod-network.2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825" 
Workload="ci--4081.2.1--a--a2790899e3-k8s-csi--node--driver--8xjpw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000220b70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.1-a-a2790899e3", "pod":"csi-node-driver-8xjpw", "timestamp":"2024-12-13 01:28:02.289438698 +0000 UTC"}, Hostname:"ci-4081.2.1-a-a2790899e3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:28:02.373264 containerd[1811]: 2024-12-13 01:28:02.306 [INFO][4974] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:02.373264 containerd[1811]: 2024-12-13 01:28:02.307 [INFO][4974] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:02.373264 containerd[1811]: 2024-12-13 01:28:02.307 [INFO][4974] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-a2790899e3' Dec 13 01:28:02.373264 containerd[1811]: 2024-12-13 01:28:02.308 [INFO][4974] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:02.373264 containerd[1811]: 2024-12-13 01:28:02.311 [INFO][4974] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:02.373264 containerd[1811]: 2024-12-13 01:28:02.315 [INFO][4974] ipam/ipam.go 489: Trying affinity for 192.168.79.64/26 host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:02.373264 containerd[1811]: 2024-12-13 01:28:02.316 [INFO][4974] ipam/ipam.go 155: Attempting to load block cidr=192.168.79.64/26 host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:02.373264 containerd[1811]: 2024-12-13 01:28:02.318 [INFO][4974] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.79.64/26 host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:02.373264 
containerd[1811]: 2024-12-13 01:28:02.318 [INFO][4974] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.79.64/26 handle="k8s-pod-network.2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:02.373264 containerd[1811]: 2024-12-13 01:28:02.319 [INFO][4974] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825 Dec 13 01:28:02.373264 containerd[1811]: 2024-12-13 01:28:02.324 [INFO][4974] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.79.64/26 handle="k8s-pod-network.2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:02.373264 containerd[1811]: 2024-12-13 01:28:02.332 [INFO][4974] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.79.66/26] block=192.168.79.64/26 handle="k8s-pod-network.2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:02.373264 containerd[1811]: 2024-12-13 01:28:02.332 [INFO][4974] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.79.66/26] handle="k8s-pod-network.2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:02.373264 containerd[1811]: 2024-12-13 01:28:02.332 [INFO][4974] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:28:02.373264 containerd[1811]: 2024-12-13 01:28:02.332 [INFO][4974] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.66/26] IPv6=[] ContainerID="2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825" HandleID="k8s-pod-network.2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825" Workload="ci--4081.2.1--a--a2790899e3-k8s-csi--node--driver--8xjpw-eth0" Dec 13 01:28:02.373833 containerd[1811]: 2024-12-13 01:28:02.334 [INFO][4943] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825" Namespace="calico-system" Pod="csi-node-driver-8xjpw" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-csi--node--driver--8xjpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--a2790899e3-k8s-csi--node--driver--8xjpw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"31a49e50-08a6-4d91-bf9f-b4d8e0e1e065", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-a2790899e3", ContainerID:"", Pod:"csi-node-driver-8xjpw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.66/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali08417ee6a5c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:02.373833 containerd[1811]: 2024-12-13 01:28:02.334 [INFO][4943] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.79.66/32] ContainerID="2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825" Namespace="calico-system" Pod="csi-node-driver-8xjpw" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-csi--node--driver--8xjpw-eth0" Dec 13 01:28:02.373833 containerd[1811]: 2024-12-13 01:28:02.334 [INFO][4943] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08417ee6a5c ContainerID="2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825" Namespace="calico-system" Pod="csi-node-driver-8xjpw" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-csi--node--driver--8xjpw-eth0" Dec 13 01:28:02.373833 containerd[1811]: 2024-12-13 01:28:02.340 [INFO][4943] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825" Namespace="calico-system" Pod="csi-node-driver-8xjpw" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-csi--node--driver--8xjpw-eth0" Dec 13 01:28:02.373833 containerd[1811]: 2024-12-13 01:28:02.343 [INFO][4943] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825" Namespace="calico-system" Pod="csi-node-driver-8xjpw" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-csi--node--driver--8xjpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--a2790899e3-k8s-csi--node--driver--8xjpw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"31a49e50-08a6-4d91-bf9f-b4d8e0e1e065", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-a2790899e3", ContainerID:"2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825", Pod:"csi-node-driver-8xjpw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali08417ee6a5c", MAC:"82:f8:b1:0b:d4:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:02.373833 containerd[1811]: 2024-12-13 01:28:02.365 [INFO][4943] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825" Namespace="calico-system" Pod="csi-node-driver-8xjpw" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-csi--node--driver--8xjpw-eth0" Dec 13 01:28:02.386787 systemd-networkd[1382]: cali1097904c398: Link UP Dec 13 01:28:02.386951 systemd-networkd[1382]: cali1097904c398: Gained carrier Dec 13 01:28:02.408673 containerd[1811]: 2024-12-13 01:28:02.236 [INFO][4934] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--t7g7q-eth0 calico-apiserver-8659b5f7c7- calico-apiserver 630b6a63-ec70-4f62-be70-0062014125b5 792 0 2024-12-13 01:27:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8659b5f7c7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.1-a-a2790899e3 calico-apiserver-8659b5f7c7-t7g7q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1097904c398 [] []}} ContainerID="809aa90de02de60f809bcee2762bfd2e3947ea5d863284da874195894e51a98d" Namespace="calico-apiserver" Pod="calico-apiserver-8659b5f7c7-t7g7q" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--t7g7q-" Dec 13 01:28:02.408673 containerd[1811]: 2024-12-13 01:28:02.236 [INFO][4934] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="809aa90de02de60f809bcee2762bfd2e3947ea5d863284da874195894e51a98d" Namespace="calico-apiserver" Pod="calico-apiserver-8659b5f7c7-t7g7q" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--t7g7q-eth0" Dec 13 01:28:02.408673 containerd[1811]: 2024-12-13 01:28:02.295 [INFO][4977] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="809aa90de02de60f809bcee2762bfd2e3947ea5d863284da874195894e51a98d" HandleID="k8s-pod-network.809aa90de02de60f809bcee2762bfd2e3947ea5d863284da874195894e51a98d" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--t7g7q-eth0" Dec 13 01:28:02.408673 containerd[1811]: 2024-12-13 01:28:02.307 [INFO][4977] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="809aa90de02de60f809bcee2762bfd2e3947ea5d863284da874195894e51a98d" HandleID="k8s-pod-network.809aa90de02de60f809bcee2762bfd2e3947ea5d863284da874195894e51a98d" 
Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--t7g7q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003187b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.1-a-a2790899e3", "pod":"calico-apiserver-8659b5f7c7-t7g7q", "timestamp":"2024-12-13 01:28:02.29561347 +0000 UTC"}, Hostname:"ci-4081.2.1-a-a2790899e3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:28:02.408673 containerd[1811]: 2024-12-13 01:28:02.307 [INFO][4977] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:02.408673 containerd[1811]: 2024-12-13 01:28:02.332 [INFO][4977] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:02.408673 containerd[1811]: 2024-12-13 01:28:02.333 [INFO][4977] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-a2790899e3' Dec 13 01:28:02.408673 containerd[1811]: 2024-12-13 01:28:02.335 [INFO][4977] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.809aa90de02de60f809bcee2762bfd2e3947ea5d863284da874195894e51a98d" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:02.408673 containerd[1811]: 2024-12-13 01:28:02.342 [INFO][4977] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:02.408673 containerd[1811]: 2024-12-13 01:28:02.346 [INFO][4977] ipam/ipam.go 489: Trying affinity for 192.168.79.64/26 host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:02.408673 containerd[1811]: 2024-12-13 01:28:02.348 [INFO][4977] ipam/ipam.go 155: Attempting to load block cidr=192.168.79.64/26 host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:02.408673 containerd[1811]: 2024-12-13 01:28:02.350 [INFO][4977] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.79.64/26 host="ci-4081.2.1-a-a2790899e3" Dec 
13 01:28:02.408673 containerd[1811]: 2024-12-13 01:28:02.350 [INFO][4977] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.79.64/26 handle="k8s-pod-network.809aa90de02de60f809bcee2762bfd2e3947ea5d863284da874195894e51a98d" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:02.408673 containerd[1811]: 2024-12-13 01:28:02.352 [INFO][4977] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.809aa90de02de60f809bcee2762bfd2e3947ea5d863284da874195894e51a98d Dec 13 01:28:02.408673 containerd[1811]: 2024-12-13 01:28:02.368 [INFO][4977] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.79.64/26 handle="k8s-pod-network.809aa90de02de60f809bcee2762bfd2e3947ea5d863284da874195894e51a98d" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:02.408673 containerd[1811]: 2024-12-13 01:28:02.377 [INFO][4977] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.79.67/26] block=192.168.79.64/26 handle="k8s-pod-network.809aa90de02de60f809bcee2762bfd2e3947ea5d863284da874195894e51a98d" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:02.408673 containerd[1811]: 2024-12-13 01:28:02.377 [INFO][4977] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.79.67/26] handle="k8s-pod-network.809aa90de02de60f809bcee2762bfd2e3947ea5d863284da874195894e51a98d" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:02.408673 containerd[1811]: 2024-12-13 01:28:02.377 [INFO][4977] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:28:02.408673 containerd[1811]: 2024-12-13 01:28:02.377 [INFO][4977] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.67/26] IPv6=[] ContainerID="809aa90de02de60f809bcee2762bfd2e3947ea5d863284da874195894e51a98d" HandleID="k8s-pod-network.809aa90de02de60f809bcee2762bfd2e3947ea5d863284da874195894e51a98d" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--t7g7q-eth0" Dec 13 01:28:02.409779 containerd[1811]: 2024-12-13 01:28:02.383 [INFO][4934] cni-plugin/k8s.go 386: Populated endpoint ContainerID="809aa90de02de60f809bcee2762bfd2e3947ea5d863284da874195894e51a98d" Namespace="calico-apiserver" Pod="calico-apiserver-8659b5f7c7-t7g7q" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--t7g7q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--t7g7q-eth0", GenerateName:"calico-apiserver-8659b5f7c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"630b6a63-ec70-4f62-be70-0062014125b5", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8659b5f7c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-a2790899e3", ContainerID:"", Pod:"calico-apiserver-8659b5f7c7-t7g7q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.79.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1097904c398", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:02.409779 containerd[1811]: 2024-12-13 01:28:02.384 [INFO][4934] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.79.67/32] ContainerID="809aa90de02de60f809bcee2762bfd2e3947ea5d863284da874195894e51a98d" Namespace="calico-apiserver" Pod="calico-apiserver-8659b5f7c7-t7g7q" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--t7g7q-eth0" Dec 13 01:28:02.409779 containerd[1811]: 2024-12-13 01:28:02.384 [INFO][4934] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1097904c398 ContainerID="809aa90de02de60f809bcee2762bfd2e3947ea5d863284da874195894e51a98d" Namespace="calico-apiserver" Pod="calico-apiserver-8659b5f7c7-t7g7q" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--t7g7q-eth0" Dec 13 01:28:02.409779 containerd[1811]: 2024-12-13 01:28:02.386 [INFO][4934] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="809aa90de02de60f809bcee2762bfd2e3947ea5d863284da874195894e51a98d" Namespace="calico-apiserver" Pod="calico-apiserver-8659b5f7c7-t7g7q" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--t7g7q-eth0" Dec 13 01:28:02.409779 containerd[1811]: 2024-12-13 01:28:02.386 [INFO][4934] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="809aa90de02de60f809bcee2762bfd2e3947ea5d863284da874195894e51a98d" Namespace="calico-apiserver" Pod="calico-apiserver-8659b5f7c7-t7g7q" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--t7g7q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--t7g7q-eth0", GenerateName:"calico-apiserver-8659b5f7c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"630b6a63-ec70-4f62-be70-0062014125b5", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8659b5f7c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-a2790899e3", ContainerID:"809aa90de02de60f809bcee2762bfd2e3947ea5d863284da874195894e51a98d", Pod:"calico-apiserver-8659b5f7c7-t7g7q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1097904c398", MAC:"86:19:22:01:19:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:02.409779 containerd[1811]: 2024-12-13 01:28:02.404 [INFO][4934] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="809aa90de02de60f809bcee2762bfd2e3947ea5d863284da874195894e51a98d" Namespace="calico-apiserver" Pod="calico-apiserver-8659b5f7c7-t7g7q" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--t7g7q-eth0" Dec 13 01:28:02.415887 containerd[1811]: time="2024-12-13T01:28:02.415469712Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:02.415887 containerd[1811]: time="2024-12-13T01:28:02.415528032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:02.415887 containerd[1811]: time="2024-12-13T01:28:02.415542752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:02.415887 containerd[1811]: time="2024-12-13T01:28:02.415632032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:02.448140 containerd[1811]: time="2024-12-13T01:28:02.448033097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:02.448476 containerd[1811]: time="2024-12-13T01:28:02.448401498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:02.448525 containerd[1811]: time="2024-12-13T01:28:02.448491858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:02.449259 containerd[1811]: time="2024-12-13T01:28:02.448888019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:02.460790 containerd[1811]: time="2024-12-13T01:28:02.460678003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8xjpw,Uid:31a49e50-08a6-4d91-bf9f-b4d8e0e1e065,Namespace:calico-system,Attempt:1,} returns sandbox id \"2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825\"" Dec 13 01:28:02.489645 containerd[1811]: time="2024-12-13T01:28:02.489591101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:28:02.509268 containerd[1811]: time="2024-12-13T01:28:02.509101021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8659b5f7c7-t7g7q,Uid:630b6a63-ec70-4f62-be70-0062014125b5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"809aa90de02de60f809bcee2762bfd2e3947ea5d863284da874195894e51a98d\"" Dec 13 01:28:02.569285 systemd-networkd[1382]: calidbf8d2ccca6: Gained IPv6LL Dec 13 01:28:02.585009 containerd[1811]: time="2024-12-13T01:28:02.584897373Z" level=info msg="StopPodSandbox for \"c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e\"" Dec 13 01:28:02.688907 containerd[1811]: 2024-12-13 01:28:02.636 [INFO][5108] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" Dec 13 01:28:02.688907 containerd[1811]: 2024-12-13 01:28:02.636 [INFO][5108] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" iface="eth0" netns="/var/run/netns/cni-d181838c-9b61-e0ff-45e3-ad2927359988" Dec 13 01:28:02.688907 containerd[1811]: 2024-12-13 01:28:02.637 [INFO][5108] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" iface="eth0" netns="/var/run/netns/cni-d181838c-9b61-e0ff-45e3-ad2927359988" Dec 13 01:28:02.688907 containerd[1811]: 2024-12-13 01:28:02.637 [INFO][5108] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" iface="eth0" netns="/var/run/netns/cni-d181838c-9b61-e0ff-45e3-ad2927359988" Dec 13 01:28:02.688907 containerd[1811]: 2024-12-13 01:28:02.637 [INFO][5108] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" Dec 13 01:28:02.688907 containerd[1811]: 2024-12-13 01:28:02.637 [INFO][5108] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" Dec 13 01:28:02.688907 containerd[1811]: 2024-12-13 01:28:02.664 [INFO][5114] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" HandleID="k8s-pod-network.c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--mrdjt-eth0" Dec 13 01:28:02.688907 containerd[1811]: 2024-12-13 01:28:02.664 [INFO][5114] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:02.688907 containerd[1811]: 2024-12-13 01:28:02.664 [INFO][5114] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:02.688907 containerd[1811]: 2024-12-13 01:28:02.672 [WARNING][5114] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" HandleID="k8s-pod-network.c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--mrdjt-eth0" Dec 13 01:28:02.688907 containerd[1811]: 2024-12-13 01:28:02.672 [INFO][5114] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" HandleID="k8s-pod-network.c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--mrdjt-eth0" Dec 13 01:28:02.688907 containerd[1811]: 2024-12-13 01:28:02.679 [INFO][5114] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:02.688907 containerd[1811]: 2024-12-13 01:28:02.684 [INFO][5108] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" Dec 13 01:28:02.688907 containerd[1811]: time="2024-12-13T01:28:02.688593382Z" level=info msg="TearDown network for sandbox \"c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e\" successfully" Dec 13 01:28:02.688907 containerd[1811]: time="2024-12-13T01:28:02.688624862Z" level=info msg="StopPodSandbox for \"c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e\" returns successfully" Dec 13 01:28:02.694331 containerd[1811]: time="2024-12-13T01:28:02.689402184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8659b5f7c7-mrdjt,Uid:c28ab2d8-49cf-45a5-b55c-a48ac6236be7,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:28:02.694909 systemd[1]: run-netns-cni\x2dd181838c\x2d9b61\x2de0ff\x2d45e3\x2dad2927359988.mount: Deactivated successfully. 
Dec 13 01:28:02.863623 systemd-networkd[1382]: cali161aff0def4: Link UP Dec 13 01:28:02.864484 systemd-networkd[1382]: cali161aff0def4: Gained carrier Dec 13 01:28:02.880053 kubelet[3412]: I1213 01:28:02.879908 3412 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-h7l4p" podStartSLOduration=32.879848608 podStartE2EDuration="32.879848608s" podCreationTimestamp="2024-12-13 01:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:28:02.83658728 +0000 UTC m=+46.406124568" watchObservedRunningTime="2024-12-13 01:28:02.879848608 +0000 UTC m=+46.449385816" Dec 13 01:28:02.883312 containerd[1811]: 2024-12-13 01:28:02.766 [INFO][5121] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--mrdjt-eth0 calico-apiserver-8659b5f7c7- calico-apiserver c28ab2d8-49cf-45a5-b55c-a48ac6236be7 805 0 2024-12-13 01:27:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8659b5f7c7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.1-a-a2790899e3 calico-apiserver-8659b5f7c7-mrdjt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali161aff0def4 [] []}} ContainerID="f6fbde65d4fa4a974094aecc3105c57764e608aaa97d44e46d24697b0765d720" Namespace="calico-apiserver" Pod="calico-apiserver-8659b5f7c7-mrdjt" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--mrdjt-" Dec 13 01:28:02.883312 containerd[1811]: 2024-12-13 01:28:02.766 [INFO][5121] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f6fbde65d4fa4a974094aecc3105c57764e608aaa97d44e46d24697b0765d720" 
Namespace="calico-apiserver" Pod="calico-apiserver-8659b5f7c7-mrdjt" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--mrdjt-eth0" Dec 13 01:28:02.883312 containerd[1811]: 2024-12-13 01:28:02.794 [INFO][5132] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6fbde65d4fa4a974094aecc3105c57764e608aaa97d44e46d24697b0765d720" HandleID="k8s-pod-network.f6fbde65d4fa4a974094aecc3105c57764e608aaa97d44e46d24697b0765d720" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--mrdjt-eth0" Dec 13 01:28:02.883312 containerd[1811]: 2024-12-13 01:28:02.810 [INFO][5132] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f6fbde65d4fa4a974094aecc3105c57764e608aaa97d44e46d24697b0765d720" HandleID="k8s-pod-network.f6fbde65d4fa4a974094aecc3105c57764e608aaa97d44e46d24697b0765d720" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--mrdjt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004def0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.1-a-a2790899e3", "pod":"calico-apiserver-8659b5f7c7-mrdjt", "timestamp":"2024-12-13 01:28:02.794812476 +0000 UTC"}, Hostname:"ci-4081.2.1-a-a2790899e3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:28:02.883312 containerd[1811]: 2024-12-13 01:28:02.810 [INFO][5132] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:02.883312 containerd[1811]: 2024-12-13 01:28:02.810 [INFO][5132] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:28:02.883312 containerd[1811]: 2024-12-13 01:28:02.811 [INFO][5132] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-a2790899e3' Dec 13 01:28:02.883312 containerd[1811]: 2024-12-13 01:28:02.814 [INFO][5132] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f6fbde65d4fa4a974094aecc3105c57764e608aaa97d44e46d24697b0765d720" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:02.883312 containerd[1811]: 2024-12-13 01:28:02.820 [INFO][5132] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:02.883312 containerd[1811]: 2024-12-13 01:28:02.824 [INFO][5132] ipam/ipam.go 489: Trying affinity for 192.168.79.64/26 host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:02.883312 containerd[1811]: 2024-12-13 01:28:02.830 [INFO][5132] ipam/ipam.go 155: Attempting to load block cidr=192.168.79.64/26 host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:02.883312 containerd[1811]: 2024-12-13 01:28:02.837 [INFO][5132] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.79.64/26 host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:02.883312 containerd[1811]: 2024-12-13 01:28:02.837 [INFO][5132] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.79.64/26 handle="k8s-pod-network.f6fbde65d4fa4a974094aecc3105c57764e608aaa97d44e46d24697b0765d720" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:02.883312 containerd[1811]: 2024-12-13 01:28:02.839 [INFO][5132] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f6fbde65d4fa4a974094aecc3105c57764e608aaa97d44e46d24697b0765d720 Dec 13 01:28:02.883312 containerd[1811]: 2024-12-13 01:28:02.851 [INFO][5132] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.79.64/26 handle="k8s-pod-network.f6fbde65d4fa4a974094aecc3105c57764e608aaa97d44e46d24697b0765d720" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:02.883312 containerd[1811]: 2024-12-13 01:28:02.858 [INFO][5132] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.79.68/26] block=192.168.79.64/26 handle="k8s-pod-network.f6fbde65d4fa4a974094aecc3105c57764e608aaa97d44e46d24697b0765d720" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:02.883312 containerd[1811]: 2024-12-13 01:28:02.858 [INFO][5132] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.79.68/26] handle="k8s-pod-network.f6fbde65d4fa4a974094aecc3105c57764e608aaa97d44e46d24697b0765d720" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:02.883312 containerd[1811]: 2024-12-13 01:28:02.858 [INFO][5132] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:02.883312 containerd[1811]: 2024-12-13 01:28:02.858 [INFO][5132] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.68/26] IPv6=[] ContainerID="f6fbde65d4fa4a974094aecc3105c57764e608aaa97d44e46d24697b0765d720" HandleID="k8s-pod-network.f6fbde65d4fa4a974094aecc3105c57764e608aaa97d44e46d24697b0765d720" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--mrdjt-eth0" Dec 13 01:28:02.884328 containerd[1811]: 2024-12-13 01:28:02.861 [INFO][5121] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f6fbde65d4fa4a974094aecc3105c57764e608aaa97d44e46d24697b0765d720" Namespace="calico-apiserver" Pod="calico-apiserver-8659b5f7c7-mrdjt" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--mrdjt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--mrdjt-eth0", GenerateName:"calico-apiserver-8659b5f7c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"c28ab2d8-49cf-45a5-b55c-a48ac6236be7", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"8659b5f7c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-a2790899e3", ContainerID:"", Pod:"calico-apiserver-8659b5f7c7-mrdjt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali161aff0def4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:02.884328 containerd[1811]: 2024-12-13 01:28:02.861 [INFO][5121] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.79.68/32] ContainerID="f6fbde65d4fa4a974094aecc3105c57764e608aaa97d44e46d24697b0765d720" Namespace="calico-apiserver" Pod="calico-apiserver-8659b5f7c7-mrdjt" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--mrdjt-eth0" Dec 13 01:28:02.884328 containerd[1811]: 2024-12-13 01:28:02.861 [INFO][5121] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali161aff0def4 ContainerID="f6fbde65d4fa4a974094aecc3105c57764e608aaa97d44e46d24697b0765d720" Namespace="calico-apiserver" Pod="calico-apiserver-8659b5f7c7-mrdjt" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--mrdjt-eth0" Dec 13 01:28:02.884328 containerd[1811]: 2024-12-13 01:28:02.864 [INFO][5121] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f6fbde65d4fa4a974094aecc3105c57764e608aaa97d44e46d24697b0765d720" Namespace="calico-apiserver" Pod="calico-apiserver-8659b5f7c7-mrdjt" 
WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--mrdjt-eth0" Dec 13 01:28:02.884328 containerd[1811]: 2024-12-13 01:28:02.865 [INFO][5121] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f6fbde65d4fa4a974094aecc3105c57764e608aaa97d44e46d24697b0765d720" Namespace="calico-apiserver" Pod="calico-apiserver-8659b5f7c7-mrdjt" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--mrdjt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--mrdjt-eth0", GenerateName:"calico-apiserver-8659b5f7c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"c28ab2d8-49cf-45a5-b55c-a48ac6236be7", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8659b5f7c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-a2790899e3", ContainerID:"f6fbde65d4fa4a974094aecc3105c57764e608aaa97d44e46d24697b0765d720", Pod:"calico-apiserver-8659b5f7c7-mrdjt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali161aff0def4", MAC:"5a:f9:2f:24:2c:9c", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:02.884328 containerd[1811]: 2024-12-13 01:28:02.880 [INFO][5121] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f6fbde65d4fa4a974094aecc3105c57764e608aaa97d44e46d24697b0765d720" Namespace="calico-apiserver" Pod="calico-apiserver-8659b5f7c7-mrdjt" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--mrdjt-eth0" Dec 13 01:28:02.908812 containerd[1811]: time="2024-12-13T01:28:02.908676986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:02.908812 containerd[1811]: time="2024-12-13T01:28:02.908748666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:02.908812 containerd[1811]: time="2024-12-13T01:28:02.908763346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:02.909097 containerd[1811]: time="2024-12-13T01:28:02.908879826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:02.952404 containerd[1811]: time="2024-12-13T01:28:02.952210593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8659b5f7c7-mrdjt,Uid:c28ab2d8-49cf-45a5-b55c-a48ac6236be7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f6fbde65d4fa4a974094aecc3105c57764e608aaa97d44e46d24697b0765d720\"" Dec 13 01:28:03.145317 systemd-networkd[1382]: vxlan.calico: Gained IPv6LL Dec 13 01:28:03.465660 systemd-networkd[1382]: cali1097904c398: Gained IPv6LL Dec 13 01:28:03.810063 containerd[1811]: time="2024-12-13T01:28:03.810009722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:03.811846 containerd[1811]: time="2024-12-13T01:28:03.811694485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Dec 13 01:28:03.815043 containerd[1811]: time="2024-12-13T01:28:03.814987332Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:03.819335 containerd[1811]: time="2024-12-13T01:28:03.818980220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:03.819885 containerd[1811]: time="2024-12-13T01:28:03.819585821Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.329825039s" Dec 13 01:28:03.819885 containerd[1811]: 
time="2024-12-13T01:28:03.819617701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 01:28:03.823259 containerd[1811]: time="2024-12-13T01:28:03.823234628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:28:03.824574 containerd[1811]: time="2024-12-13T01:28:03.823424349Z" level=info msg="CreateContainer within sandbox \"2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:28:03.867485 containerd[1811]: time="2024-12-13T01:28:03.867446997Z" level=info msg="CreateContainer within sandbox \"2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1d557e6808282e9b9b45331479809a1552be6d30d80de399629404dde660c234\"" Dec 13 01:28:03.869663 containerd[1811]: time="2024-12-13T01:28:03.868887800Z" level=info msg="StartContainer for \"1d557e6808282e9b9b45331479809a1552be6d30d80de399629404dde660c234\"" Dec 13 01:28:03.963935 containerd[1811]: time="2024-12-13T01:28:03.963894592Z" level=info msg="StartContainer for \"1d557e6808282e9b9b45331479809a1552be6d30d80de399629404dde660c234\" returns successfully" Dec 13 01:28:04.169280 systemd-networkd[1382]: cali08417ee6a5c: Gained IPv6LL Dec 13 01:28:04.361363 systemd-networkd[1382]: cali161aff0def4: Gained IPv6LL Dec 13 01:28:04.583101 containerd[1811]: time="2024-12-13T01:28:04.582492238Z" level=info msg="StopPodSandbox for \"fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8\"" Dec 13 01:28:04.673968 containerd[1811]: 2024-12-13 01:28:04.631 [INFO][5244] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" Dec 13 01:28:04.673968 containerd[1811]: 2024-12-13 01:28:04.632 [INFO][5244] cni-plugin/dataplane_linux.go 559: Deleting 
workload's device in netns. ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" iface="eth0" netns="/var/run/netns/cni-4d1b67aa-79c6-d72a-23aa-f04d3be6d77b" Dec 13 01:28:04.673968 containerd[1811]: 2024-12-13 01:28:04.632 [INFO][5244] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" iface="eth0" netns="/var/run/netns/cni-4d1b67aa-79c6-d72a-23aa-f04d3be6d77b" Dec 13 01:28:04.673968 containerd[1811]: 2024-12-13 01:28:04.633 [INFO][5244] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" iface="eth0" netns="/var/run/netns/cni-4d1b67aa-79c6-d72a-23aa-f04d3be6d77b" Dec 13 01:28:04.673968 containerd[1811]: 2024-12-13 01:28:04.633 [INFO][5244] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" Dec 13 01:28:04.673968 containerd[1811]: 2024-12-13 01:28:04.633 [INFO][5244] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" Dec 13 01:28:04.673968 containerd[1811]: 2024-12-13 01:28:04.656 [INFO][5250] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" HandleID="k8s-pod-network.fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--kube--controllers--64b868bd8--nmv69-eth0" Dec 13 01:28:04.673968 containerd[1811]: 2024-12-13 01:28:04.657 [INFO][5250] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:04.673968 containerd[1811]: 2024-12-13 01:28:04.657 [INFO][5250] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:28:04.673968 containerd[1811]: 2024-12-13 01:28:04.668 [WARNING][5250] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" HandleID="k8s-pod-network.fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--kube--controllers--64b868bd8--nmv69-eth0" Dec 13 01:28:04.673968 containerd[1811]: 2024-12-13 01:28:04.668 [INFO][5250] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" HandleID="k8s-pod-network.fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--kube--controllers--64b868bd8--nmv69-eth0" Dec 13 01:28:04.673968 containerd[1811]: 2024-12-13 01:28:04.670 [INFO][5250] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:04.673968 containerd[1811]: 2024-12-13 01:28:04.672 [INFO][5244] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" Dec 13 01:28:04.674503 containerd[1811]: time="2024-12-13T01:28:04.674310903Z" level=info msg="TearDown network for sandbox \"fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8\" successfully" Dec 13 01:28:04.674503 containerd[1811]: time="2024-12-13T01:28:04.674347823Z" level=info msg="StopPodSandbox for \"fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8\" returns successfully" Dec 13 01:28:04.675489 containerd[1811]: time="2024-12-13T01:28:04.675050184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64b868bd8-nmv69,Uid:56a8139e-d217-4073-b397-5d26c40f4540,Namespace:calico-system,Attempt:1,}" Dec 13 01:28:04.681774 systemd[1]: run-netns-cni\x2d4d1b67aa\x2d79c6\x2dd72a\x2d23aa\x2df04d3be6d77b.mount: Deactivated successfully. 
Dec 13 01:28:04.835890 systemd-networkd[1382]: cali247199f9299: Link UP Dec 13 01:28:04.837393 systemd-networkd[1382]: cali247199f9299: Gained carrier Dec 13 01:28:04.852751 containerd[1811]: 2024-12-13 01:28:04.767 [INFO][5256] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--a2790899e3-k8s-calico--kube--controllers--64b868bd8--nmv69-eth0 calico-kube-controllers-64b868bd8- calico-system 56a8139e-d217-4073-b397-5d26c40f4540 828 0 2024-12-13 01:27:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64b868bd8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.2.1-a-a2790899e3 calico-kube-controllers-64b868bd8-nmv69 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali247199f9299 [] []}} ContainerID="c57bbb7ed962a5a08f8cfaf9d18c781fd09ba0e341e9632003caffc6a7e49b85" Namespace="calico-system" Pod="calico-kube-controllers-64b868bd8-nmv69" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-calico--kube--controllers--64b868bd8--nmv69-" Dec 13 01:28:04.852751 containerd[1811]: 2024-12-13 01:28:04.767 [INFO][5256] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c57bbb7ed962a5a08f8cfaf9d18c781fd09ba0e341e9632003caffc6a7e49b85" Namespace="calico-system" Pod="calico-kube-controllers-64b868bd8-nmv69" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-calico--kube--controllers--64b868bd8--nmv69-eth0" Dec 13 01:28:04.852751 containerd[1811]: 2024-12-13 01:28:04.793 [INFO][5267] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c57bbb7ed962a5a08f8cfaf9d18c781fd09ba0e341e9632003caffc6a7e49b85" HandleID="k8s-pod-network.c57bbb7ed962a5a08f8cfaf9d18c781fd09ba0e341e9632003caffc6a7e49b85" 
Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--kube--controllers--64b868bd8--nmv69-eth0" Dec 13 01:28:04.852751 containerd[1811]: 2024-12-13 01:28:04.803 [INFO][5267] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c57bbb7ed962a5a08f8cfaf9d18c781fd09ba0e341e9632003caffc6a7e49b85" HandleID="k8s-pod-network.c57bbb7ed962a5a08f8cfaf9d18c781fd09ba0e341e9632003caffc6a7e49b85" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--kube--controllers--64b868bd8--nmv69-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028c930), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.1-a-a2790899e3", "pod":"calico-kube-controllers-64b868bd8-nmv69", "timestamp":"2024-12-13 01:28:04.793220943 +0000 UTC"}, Hostname:"ci-4081.2.1-a-a2790899e3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:28:04.852751 containerd[1811]: 2024-12-13 01:28:04.803 [INFO][5267] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:04.852751 containerd[1811]: 2024-12-13 01:28:04.803 [INFO][5267] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:28:04.852751 containerd[1811]: 2024-12-13 01:28:04.803 [INFO][5267] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-a2790899e3' Dec 13 01:28:04.852751 containerd[1811]: 2024-12-13 01:28:04.804 [INFO][5267] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c57bbb7ed962a5a08f8cfaf9d18c781fd09ba0e341e9632003caffc6a7e49b85" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:04.852751 containerd[1811]: 2024-12-13 01:28:04.807 [INFO][5267] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:04.852751 containerd[1811]: 2024-12-13 01:28:04.810 [INFO][5267] ipam/ipam.go 489: Trying affinity for 192.168.79.64/26 host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:04.852751 containerd[1811]: 2024-12-13 01:28:04.812 [INFO][5267] ipam/ipam.go 155: Attempting to load block cidr=192.168.79.64/26 host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:04.852751 containerd[1811]: 2024-12-13 01:28:04.814 [INFO][5267] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.79.64/26 host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:04.852751 containerd[1811]: 2024-12-13 01:28:04.814 [INFO][5267] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.79.64/26 handle="k8s-pod-network.c57bbb7ed962a5a08f8cfaf9d18c781fd09ba0e341e9632003caffc6a7e49b85" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:04.852751 containerd[1811]: 2024-12-13 01:28:04.815 [INFO][5267] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c57bbb7ed962a5a08f8cfaf9d18c781fd09ba0e341e9632003caffc6a7e49b85 Dec 13 01:28:04.852751 containerd[1811]: 2024-12-13 01:28:04.820 [INFO][5267] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.79.64/26 handle="k8s-pod-network.c57bbb7ed962a5a08f8cfaf9d18c781fd09ba0e341e9632003caffc6a7e49b85" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:04.852751 containerd[1811]: 2024-12-13 01:28:04.828 [INFO][5267] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.79.69/26] block=192.168.79.64/26 handle="k8s-pod-network.c57bbb7ed962a5a08f8cfaf9d18c781fd09ba0e341e9632003caffc6a7e49b85" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:04.852751 containerd[1811]: 2024-12-13 01:28:04.828 [INFO][5267] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.79.69/26] handle="k8s-pod-network.c57bbb7ed962a5a08f8cfaf9d18c781fd09ba0e341e9632003caffc6a7e49b85" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:04.852751 containerd[1811]: 2024-12-13 01:28:04.828 [INFO][5267] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:04.852751 containerd[1811]: 2024-12-13 01:28:04.828 [INFO][5267] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.69/26] IPv6=[] ContainerID="c57bbb7ed962a5a08f8cfaf9d18c781fd09ba0e341e9632003caffc6a7e49b85" HandleID="k8s-pod-network.c57bbb7ed962a5a08f8cfaf9d18c781fd09ba0e341e9632003caffc6a7e49b85" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--kube--controllers--64b868bd8--nmv69-eth0" Dec 13 01:28:04.854071 containerd[1811]: 2024-12-13 01:28:04.832 [INFO][5256] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c57bbb7ed962a5a08f8cfaf9d18c781fd09ba0e341e9632003caffc6a7e49b85" Namespace="calico-system" Pod="calico-kube-controllers-64b868bd8-nmv69" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-calico--kube--controllers--64b868bd8--nmv69-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--a2790899e3-k8s-calico--kube--controllers--64b868bd8--nmv69-eth0", GenerateName:"calico-kube-controllers-64b868bd8-", Namespace:"calico-system", SelfLink:"", UID:"56a8139e-d217-4073-b397-5d26c40f4540", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64b868bd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-a2790899e3", ContainerID:"", Pod:"calico-kube-controllers-64b868bd8-nmv69", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali247199f9299", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:04.854071 containerd[1811]: 2024-12-13 01:28:04.832 [INFO][5256] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.79.69/32] ContainerID="c57bbb7ed962a5a08f8cfaf9d18c781fd09ba0e341e9632003caffc6a7e49b85" Namespace="calico-system" Pod="calico-kube-controllers-64b868bd8-nmv69" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-calico--kube--controllers--64b868bd8--nmv69-eth0" Dec 13 01:28:04.854071 containerd[1811]: 2024-12-13 01:28:04.832 [INFO][5256] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali247199f9299 ContainerID="c57bbb7ed962a5a08f8cfaf9d18c781fd09ba0e341e9632003caffc6a7e49b85" Namespace="calico-system" Pod="calico-kube-controllers-64b868bd8-nmv69" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-calico--kube--controllers--64b868bd8--nmv69-eth0" Dec 13 01:28:04.854071 containerd[1811]: 2024-12-13 01:28:04.836 [INFO][5256] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c57bbb7ed962a5a08f8cfaf9d18c781fd09ba0e341e9632003caffc6a7e49b85" 
Namespace="calico-system" Pod="calico-kube-controllers-64b868bd8-nmv69" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-calico--kube--controllers--64b868bd8--nmv69-eth0" Dec 13 01:28:04.854071 containerd[1811]: 2024-12-13 01:28:04.837 [INFO][5256] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c57bbb7ed962a5a08f8cfaf9d18c781fd09ba0e341e9632003caffc6a7e49b85" Namespace="calico-system" Pod="calico-kube-controllers-64b868bd8-nmv69" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-calico--kube--controllers--64b868bd8--nmv69-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--a2790899e3-k8s-calico--kube--controllers--64b868bd8--nmv69-eth0", GenerateName:"calico-kube-controllers-64b868bd8-", Namespace:"calico-system", SelfLink:"", UID:"56a8139e-d217-4073-b397-5d26c40f4540", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64b868bd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-a2790899e3", ContainerID:"c57bbb7ed962a5a08f8cfaf9d18c781fd09ba0e341e9632003caffc6a7e49b85", Pod:"calico-kube-controllers-64b868bd8-nmv69", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali247199f9299", MAC:"e6:2b:48:5f:6d:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:04.854071 containerd[1811]: 2024-12-13 01:28:04.850 [INFO][5256] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c57bbb7ed962a5a08f8cfaf9d18c781fd09ba0e341e9632003caffc6a7e49b85" Namespace="calico-system" Pod="calico-kube-controllers-64b868bd8-nmv69" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-calico--kube--controllers--64b868bd8--nmv69-eth0" Dec 13 01:28:04.877634 containerd[1811]: time="2024-12-13T01:28:04.876896431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:04.877870 containerd[1811]: time="2024-12-13T01:28:04.877612713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:04.877870 containerd[1811]: time="2024-12-13T01:28:04.877626513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:04.877870 containerd[1811]: time="2024-12-13T01:28:04.877719153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:04.927569 containerd[1811]: time="2024-12-13T01:28:04.927526653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64b868bd8-nmv69,Uid:56a8139e-d217-4073-b397-5d26c40f4540,Namespace:calico-system,Attempt:1,} returns sandbox id \"c57bbb7ed962a5a08f8cfaf9d18c781fd09ba0e341e9632003caffc6a7e49b85\"" Dec 13 01:28:05.581297 containerd[1811]: time="2024-12-13T01:28:05.581239210Z" level=info msg="StopPodSandbox for \"d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2\"" Dec 13 01:28:05.665099 containerd[1811]: 2024-12-13 01:28:05.627 [INFO][5341] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" Dec 13 01:28:05.665099 containerd[1811]: 2024-12-13 01:28:05.628 [INFO][5341] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" iface="eth0" netns="/var/run/netns/cni-0a250efc-edf3-0b71-1d77-cae88d4d77ac" Dec 13 01:28:05.665099 containerd[1811]: 2024-12-13 01:28:05.628 [INFO][5341] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" iface="eth0" netns="/var/run/netns/cni-0a250efc-edf3-0b71-1d77-cae88d4d77ac" Dec 13 01:28:05.665099 containerd[1811]: 2024-12-13 01:28:05.628 [INFO][5341] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" iface="eth0" netns="/var/run/netns/cni-0a250efc-edf3-0b71-1d77-cae88d4d77ac" Dec 13 01:28:05.665099 containerd[1811]: 2024-12-13 01:28:05.628 [INFO][5341] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" Dec 13 01:28:05.665099 containerd[1811]: 2024-12-13 01:28:05.628 [INFO][5341] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" Dec 13 01:28:05.665099 containerd[1811]: 2024-12-13 01:28:05.652 [INFO][5348] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" HandleID="k8s-pod-network.d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" Workload="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--7hccp-eth0" Dec 13 01:28:05.665099 containerd[1811]: 2024-12-13 01:28:05.652 [INFO][5348] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:05.665099 containerd[1811]: 2024-12-13 01:28:05.652 [INFO][5348] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:05.665099 containerd[1811]: 2024-12-13 01:28:05.660 [WARNING][5348] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" HandleID="k8s-pod-network.d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" Workload="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--7hccp-eth0" Dec 13 01:28:05.665099 containerd[1811]: 2024-12-13 01:28:05.660 [INFO][5348] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" HandleID="k8s-pod-network.d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" Workload="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--7hccp-eth0" Dec 13 01:28:05.665099 containerd[1811]: 2024-12-13 01:28:05.662 [INFO][5348] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:05.665099 containerd[1811]: 2024-12-13 01:28:05.663 [INFO][5341] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" Dec 13 01:28:05.666611 containerd[1811]: time="2024-12-13T01:28:05.665765301Z" level=info msg="TearDown network for sandbox \"d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2\" successfully" Dec 13 01:28:05.666611 containerd[1811]: time="2024-12-13T01:28:05.665794861Z" level=info msg="StopPodSandbox for \"d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2\" returns successfully" Dec 13 01:28:05.667606 containerd[1811]: time="2024-12-13T01:28:05.667536864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7hccp,Uid:a1d94ea8-bc82-4759-bbe6-96e9a3a3933f,Namespace:kube-system,Attempt:1,}" Dec 13 01:28:05.682851 systemd[1]: run-netns-cni\x2d0a250efc\x2dedf3\x2d0b71\x2d1d77\x2dcae88d4d77ac.mount: Deactivated successfully. 
Dec 13 01:28:05.861742 systemd-networkd[1382]: cali7914d6e9403: Link UP Dec 13 01:28:05.862337 systemd-networkd[1382]: cali7914d6e9403: Gained carrier Dec 13 01:28:05.882898 containerd[1811]: 2024-12-13 01:28:05.763 [INFO][5358] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--7hccp-eth0 coredns-76f75df574- kube-system a1d94ea8-bc82-4759-bbe6-96e9a3a3933f 836 0 2024-12-13 01:27:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.1-a-a2790899e3 coredns-76f75df574-7hccp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7914d6e9403 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="faa44f7872a184f7f5b37e8d6877bcc51c2a6936b51e0ec87343451c17bc2350" Namespace="kube-system" Pod="coredns-76f75df574-7hccp" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--7hccp-" Dec 13 01:28:05.882898 containerd[1811]: 2024-12-13 01:28:05.763 [INFO][5358] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="faa44f7872a184f7f5b37e8d6877bcc51c2a6936b51e0ec87343451c17bc2350" Namespace="kube-system" Pod="coredns-76f75df574-7hccp" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--7hccp-eth0" Dec 13 01:28:05.882898 containerd[1811]: 2024-12-13 01:28:05.799 [INFO][5371] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="faa44f7872a184f7f5b37e8d6877bcc51c2a6936b51e0ec87343451c17bc2350" HandleID="k8s-pod-network.faa44f7872a184f7f5b37e8d6877bcc51c2a6936b51e0ec87343451c17bc2350" Workload="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--7hccp-eth0" Dec 13 01:28:05.882898 containerd[1811]: 2024-12-13 01:28:05.811 [INFO][5371] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="faa44f7872a184f7f5b37e8d6877bcc51c2a6936b51e0ec87343451c17bc2350" HandleID="k8s-pod-network.faa44f7872a184f7f5b37e8d6877bcc51c2a6936b51e0ec87343451c17bc2350" Workload="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--7hccp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004314e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.1-a-a2790899e3", "pod":"coredns-76f75df574-7hccp", "timestamp":"2024-12-13 01:28:05.799779971 +0000 UTC"}, Hostname:"ci-4081.2.1-a-a2790899e3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:28:05.882898 containerd[1811]: 2024-12-13 01:28:05.811 [INFO][5371] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:05.882898 containerd[1811]: 2024-12-13 01:28:05.812 [INFO][5371] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:28:05.882898 containerd[1811]: 2024-12-13 01:28:05.812 [INFO][5371] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-a2790899e3' Dec 13 01:28:05.882898 containerd[1811]: 2024-12-13 01:28:05.814 [INFO][5371] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.faa44f7872a184f7f5b37e8d6877bcc51c2a6936b51e0ec87343451c17bc2350" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:05.882898 containerd[1811]: 2024-12-13 01:28:05.818 [INFO][5371] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:05.882898 containerd[1811]: 2024-12-13 01:28:05.823 [INFO][5371] ipam/ipam.go 489: Trying affinity for 192.168.79.64/26 host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:05.882898 containerd[1811]: 2024-12-13 01:28:05.825 [INFO][5371] ipam/ipam.go 155: Attempting to load block cidr=192.168.79.64/26 host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:05.882898 containerd[1811]: 2024-12-13 01:28:05.830 [INFO][5371] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.79.64/26 host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:05.882898 containerd[1811]: 2024-12-13 01:28:05.830 [INFO][5371] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.79.64/26 handle="k8s-pod-network.faa44f7872a184f7f5b37e8d6877bcc51c2a6936b51e0ec87343451c17bc2350" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:05.882898 containerd[1811]: 2024-12-13 01:28:05.833 [INFO][5371] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.faa44f7872a184f7f5b37e8d6877bcc51c2a6936b51e0ec87343451c17bc2350 Dec 13 01:28:05.882898 containerd[1811]: 2024-12-13 01:28:05.842 [INFO][5371] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.79.64/26 handle="k8s-pod-network.faa44f7872a184f7f5b37e8d6877bcc51c2a6936b51e0ec87343451c17bc2350" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:05.882898 containerd[1811]: 2024-12-13 01:28:05.853 [INFO][5371] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.79.70/26] block=192.168.79.64/26 handle="k8s-pod-network.faa44f7872a184f7f5b37e8d6877bcc51c2a6936b51e0ec87343451c17bc2350" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:05.882898 containerd[1811]: 2024-12-13 01:28:05.853 [INFO][5371] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.79.70/26] handle="k8s-pod-network.faa44f7872a184f7f5b37e8d6877bcc51c2a6936b51e0ec87343451c17bc2350" host="ci-4081.2.1-a-a2790899e3" Dec 13 01:28:05.882898 containerd[1811]: 2024-12-13 01:28:05.853 [INFO][5371] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:05.882898 containerd[1811]: 2024-12-13 01:28:05.853 [INFO][5371] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.70/26] IPv6=[] ContainerID="faa44f7872a184f7f5b37e8d6877bcc51c2a6936b51e0ec87343451c17bc2350" HandleID="k8s-pod-network.faa44f7872a184f7f5b37e8d6877bcc51c2a6936b51e0ec87343451c17bc2350" Workload="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--7hccp-eth0" Dec 13 01:28:05.884983 containerd[1811]: 2024-12-13 01:28:05.856 [INFO][5358] cni-plugin/k8s.go 386: Populated endpoint ContainerID="faa44f7872a184f7f5b37e8d6877bcc51c2a6936b51e0ec87343451c17bc2350" Namespace="kube-system" Pod="coredns-76f75df574-7hccp" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--7hccp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--7hccp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a1d94ea8-bc82-4759-bbe6-96e9a3a3933f", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-a2790899e3", ContainerID:"", Pod:"coredns-76f75df574-7hccp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7914d6e9403", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:05.884983 containerd[1811]: 2024-12-13 01:28:05.856 [INFO][5358] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.79.70/32] ContainerID="faa44f7872a184f7f5b37e8d6877bcc51c2a6936b51e0ec87343451c17bc2350" Namespace="kube-system" Pod="coredns-76f75df574-7hccp" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--7hccp-eth0" Dec 13 01:28:05.884983 containerd[1811]: 2024-12-13 01:28:05.856 [INFO][5358] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7914d6e9403 ContainerID="faa44f7872a184f7f5b37e8d6877bcc51c2a6936b51e0ec87343451c17bc2350" Namespace="kube-system" Pod="coredns-76f75df574-7hccp" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--7hccp-eth0" Dec 13 01:28:05.884983 containerd[1811]: 2024-12-13 01:28:05.862 [INFO][5358] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="faa44f7872a184f7f5b37e8d6877bcc51c2a6936b51e0ec87343451c17bc2350" Namespace="kube-system" Pod="coredns-76f75df574-7hccp" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--7hccp-eth0" Dec 13 01:28:05.884983 containerd[1811]: 2024-12-13 01:28:05.862 [INFO][5358] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="faa44f7872a184f7f5b37e8d6877bcc51c2a6936b51e0ec87343451c17bc2350" Namespace="kube-system" Pod="coredns-76f75df574-7hccp" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--7hccp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--7hccp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a1d94ea8-bc82-4759-bbe6-96e9a3a3933f", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-a2790899e3", ContainerID:"faa44f7872a184f7f5b37e8d6877bcc51c2a6936b51e0ec87343451c17bc2350", Pod:"coredns-76f75df574-7hccp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7914d6e9403", MAC:"0e:05:b5:b9:50:14", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:05.884983 containerd[1811]: 2024-12-13 01:28:05.879 [INFO][5358] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="faa44f7872a184f7f5b37e8d6877bcc51c2a6936b51e0ec87343451c17bc2350" Namespace="kube-system" Pod="coredns-76f75df574-7hccp" WorkloadEndpoint="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--7hccp-eth0" Dec 13 01:28:05.924884 containerd[1811]: time="2024-12-13T01:28:05.924708582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:28:05.924884 containerd[1811]: time="2024-12-13T01:28:05.924768662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:28:05.924884 containerd[1811]: time="2024-12-13T01:28:05.924784462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:05.925567 containerd[1811]: time="2024-12-13T01:28:05.925213703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:28:05.981507 containerd[1811]: time="2024-12-13T01:28:05.981398177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-7hccp,Uid:a1d94ea8-bc82-4759-bbe6-96e9a3a3933f,Namespace:kube-system,Attempt:1,} returns sandbox id \"faa44f7872a184f7f5b37e8d6877bcc51c2a6936b51e0ec87343451c17bc2350\"" Dec 13 01:28:05.985653 containerd[1811]: time="2024-12-13T01:28:05.985449985Z" level=info msg="CreateContainer within sandbox \"faa44f7872a184f7f5b37e8d6877bcc51c2a6936b51e0ec87343451c17bc2350\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:28:06.022192 containerd[1811]: time="2024-12-13T01:28:06.022122219Z" level=info msg="CreateContainer within sandbox \"faa44f7872a184f7f5b37e8d6877bcc51c2a6936b51e0ec87343451c17bc2350\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d60d9ac6197d915a0e0df2fda816ba0684b364f24c0da076da9268927d59cd71\"" Dec 13 01:28:06.027299 containerd[1811]: time="2024-12-13T01:28:06.026732748Z" level=info msg="StartContainer for \"d60d9ac6197d915a0e0df2fda816ba0684b364f24c0da076da9268927d59cd71\"" Dec 13 01:28:06.092171 containerd[1811]: time="2024-12-13T01:28:06.091856839Z" level=info msg="StartContainer for \"d60d9ac6197d915a0e0df2fda816ba0684b364f24c0da076da9268927d59cd71\" returns successfully" Dec 13 01:28:06.217551 systemd-networkd[1382]: cali247199f9299: Gained IPv6LL Dec 13 01:28:06.858390 kubelet[3412]: I1213 01:28:06.857688 3412 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-7hccp" podStartSLOduration=36.857646834 podStartE2EDuration="36.857646834s" podCreationTimestamp="2024-12-13 01:27:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:28:06.855754309 +0000 UTC m=+50.425291517" watchObservedRunningTime="2024-12-13 01:28:06.857646834 +0000 UTC 
m=+50.427184042" Dec 13 01:28:06.942937 containerd[1811]: time="2024-12-13T01:28:06.942885826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:06.944741 containerd[1811]: time="2024-12-13T01:28:06.944707145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Dec 13 01:28:06.947650 containerd[1811]: time="2024-12-13T01:28:06.947619456Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:06.952347 containerd[1811]: time="2024-12-13T01:28:06.952298323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:06.953055 containerd[1811]: time="2024-12-13T01:28:06.953027331Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 3.129549981s" Dec 13 01:28:06.953239 containerd[1811]: time="2024-12-13T01:28:06.953133058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 01:28:06.954803 containerd[1811]: time="2024-12-13T01:28:06.954758124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:28:06.955294 containerd[1811]: time="2024-12-13T01:28:06.955097347Z" level=info msg="CreateContainer within sandbox 
\"809aa90de02de60f809bcee2762bfd2e3947ea5d863284da874195894e51a98d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:28:06.988724 containerd[1811]: time="2024-12-13T01:28:06.988685390Z" level=info msg="CreateContainer within sandbox \"809aa90de02de60f809bcee2762bfd2e3947ea5d863284da874195894e51a98d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"06760fa6ea1e6627445d4a9d13b5afcc8ee75e74fe8ddd8bf583d4e80d904059\"" Dec 13 01:28:06.989946 containerd[1811]: time="2024-12-13T01:28:06.989902070Z" level=info msg="StartContainer for \"06760fa6ea1e6627445d4a9d13b5afcc8ee75e74fe8ddd8bf583d4e80d904059\"" Dec 13 01:28:07.055793 containerd[1811]: time="2024-12-13T01:28:07.055679825Z" level=info msg="StartContainer for \"06760fa6ea1e6627445d4a9d13b5afcc8ee75e74fe8ddd8bf583d4e80d904059\" returns successfully" Dec 13 01:28:07.259410 containerd[1811]: time="2024-12-13T01:28:07.258592937Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:07.261589 containerd[1811]: time="2024-12-13T01:28:07.261555412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 01:28:07.263721 containerd[1811]: time="2024-12-13T01:28:07.263675911Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 308.876703ms" Dec 13 01:28:07.263721 containerd[1811]: time="2024-12-13T01:28:07.263715153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 
01:28:07.264270 containerd[1811]: time="2024-12-13T01:28:07.264234587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:28:07.266479 containerd[1811]: time="2024-12-13T01:28:07.265709444Z" level=info msg="CreateContainer within sandbox \"f6fbde65d4fa4a974094aecc3105c57764e608aaa97d44e46d24697b0765d720\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:28:07.300366 containerd[1811]: time="2024-12-13T01:28:07.300210388Z" level=info msg="CreateContainer within sandbox \"f6fbde65d4fa4a974094aecc3105c57764e608aaa97d44e46d24697b0765d720\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c7c9bd9873b5f2970d070d9a22266eec9961b34d585a05432095af6da65aba00\"" Dec 13 01:28:07.303483 containerd[1811]: time="2024-12-13T01:28:07.302364769Z" level=info msg="StartContainer for \"c7c9bd9873b5f2970d070d9a22266eec9961b34d585a05432095af6da65aba00\"" Dec 13 01:28:07.362044 containerd[1811]: time="2024-12-13T01:28:07.361999321Z" level=info msg="StartContainer for \"c7c9bd9873b5f2970d070d9a22266eec9961b34d585a05432095af6da65aba00\" returns successfully" Dec 13 01:28:07.561449 systemd-networkd[1382]: cali7914d6e9403: Gained IPv6LL Dec 13 01:28:07.873405 kubelet[3412]: I1213 01:28:07.871019 3412 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8659b5f7c7-t7g7q" podStartSLOduration=26.43023689 podStartE2EDuration="30.870728223s" podCreationTimestamp="2024-12-13 01:27:37 +0000 UTC" firstStartedPulling="2024-12-13 01:28:02.513000108 +0000 UTC m=+46.082537316" lastFinishedPulling="2024-12-13 01:28:06.953491441 +0000 UTC m=+50.523028649" observedRunningTime="2024-12-13 01:28:07.869224572 +0000 UTC m=+51.438761780" watchObservedRunningTime="2024-12-13 01:28:07.870728223 +0000 UTC m=+51.440265431" Dec 13 01:28:08.853347 containerd[1811]: time="2024-12-13T01:28:08.853273396Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:08.855056 containerd[1811]: time="2024-12-13T01:28:08.855017528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Dec 13 01:28:08.858840 containerd[1811]: time="2024-12-13T01:28:08.858784715Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:08.863963 containerd[1811]: time="2024-12-13T01:28:08.863583429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:08.864604 containerd[1811]: time="2024-12-13T01:28:08.864536515Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.600266846s" Dec 13 01:28:08.864742 containerd[1811]: time="2024-12-13T01:28:08.864726517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Dec 13 01:28:08.866807 containerd[1811]: time="2024-12-13T01:28:08.866784251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:28:08.868884 kubelet[3412]: I1213 01:28:08.868857 3412 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:28:08.870018 kubelet[3412]: I1213 01:28:08.869691 3412 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:28:08.870125 containerd[1811]: time="2024-12-13T01:28:08.869807913Z" level=info msg="CreateContainer within sandbox \"2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:28:08.906386 containerd[1811]: time="2024-12-13T01:28:08.906333890Z" level=info msg="CreateContainer within sandbox \"2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f2cab8b1b19ff792ddee5d08bdd4dc49b2e00f4cded1e7843c32884023b3b08f\"" Dec 13 01:28:08.908260 containerd[1811]: time="2024-12-13T01:28:08.907126176Z" level=info msg="StartContainer for \"f2cab8b1b19ff792ddee5d08bdd4dc49b2e00f4cded1e7843c32884023b3b08f\"" Dec 13 01:28:08.942925 systemd[1]: run-containerd-runc-k8s.io-f2cab8b1b19ff792ddee5d08bdd4dc49b2e00f4cded1e7843c32884023b3b08f-runc.jjxh72.mount: Deactivated successfully. 
Dec 13 01:28:08.975587 containerd[1811]: time="2024-12-13T01:28:08.975530978Z" level=info msg="StartContainer for \"f2cab8b1b19ff792ddee5d08bdd4dc49b2e00f4cded1e7843c32884023b3b08f\" returns successfully" Dec 13 01:28:09.693547 kubelet[3412]: I1213 01:28:09.693396 3412 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 01:28:09.693547 kubelet[3412]: I1213 01:28:09.693427 3412 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 01:28:09.892113 kubelet[3412]: I1213 01:28:09.891727 3412 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8659b5f7c7-mrdjt" podStartSLOduration=28.583374029 podStartE2EDuration="32.891687323s" podCreationTimestamp="2024-12-13 01:27:37 +0000 UTC" firstStartedPulling="2024-12-13 01:28:02.95571864 +0000 UTC m=+46.525255808" lastFinishedPulling="2024-12-13 01:28:07.264031894 +0000 UTC m=+50.833569102" observedRunningTime="2024-12-13 01:28:07.900255271 +0000 UTC m=+51.469792519" watchObservedRunningTime="2024-12-13 01:28:09.891687323 +0000 UTC m=+53.461224531" Dec 13 01:28:09.892113 kubelet[3412]: I1213 01:28:09.892097 3412 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-8xjpw" podStartSLOduration=26.501898557 podStartE2EDuration="32.892076686s" podCreationTimestamp="2024-12-13 01:27:37 +0000 UTC" firstStartedPulling="2024-12-13 01:28:02.475114512 +0000 UTC m=+46.044651720" lastFinishedPulling="2024-12-13 01:28:08.865292641 +0000 UTC m=+52.434829849" observedRunningTime="2024-12-13 01:28:09.891459161 +0000 UTC m=+53.460996369" watchObservedRunningTime="2024-12-13 01:28:09.892076686 +0000 UTC m=+53.461613894" Dec 13 01:28:11.034354 containerd[1811]: time="2024-12-13T01:28:11.034301825Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:11.036926 containerd[1811]: time="2024-12-13T01:28:11.036748682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Dec 13 01:28:11.040245 containerd[1811]: time="2024-12-13T01:28:11.040218587Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:11.046150 containerd[1811]: time="2024-12-13T01:28:11.046112389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:11.046959 containerd[1811]: time="2024-12-13T01:28:11.046712833Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 2.179767701s" Dec 13 01:28:11.046959 containerd[1811]: time="2024-12-13T01:28:11.046749673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Dec 13 01:28:11.059876 containerd[1811]: time="2024-12-13T01:28:11.059615404Z" level=info msg="CreateContainer within sandbox \"c57bbb7ed962a5a08f8cfaf9d18c781fd09ba0e341e9632003caffc6a7e49b85\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:28:11.090811 containerd[1811]: time="2024-12-13T01:28:11.090764704Z" level=info msg="CreateContainer within 
sandbox \"c57bbb7ed962a5a08f8cfaf9d18c781fd09ba0e341e9632003caffc6a7e49b85\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6c86627915b46df8cd958b4801da63ac364f52bcef3c9666d8ddf335eb5208d3\"" Dec 13 01:28:11.093340 containerd[1811]: time="2024-12-13T01:28:11.092471716Z" level=info msg="StartContainer for \"6c86627915b46df8cd958b4801da63ac364f52bcef3c9666d8ddf335eb5208d3\"" Dec 13 01:28:11.161187 containerd[1811]: time="2024-12-13T01:28:11.161101600Z" level=info msg="StartContainer for \"6c86627915b46df8cd958b4801da63ac364f52bcef3c9666d8ddf335eb5208d3\" returns successfully" Dec 13 01:28:11.943691 kubelet[3412]: I1213 01:28:11.941672 3412 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-64b868bd8-nmv69" podStartSLOduration=28.823252687 podStartE2EDuration="34.941624547s" podCreationTimestamp="2024-12-13 01:27:37 +0000 UTC" firstStartedPulling="2024-12-13 01:28:04.928819176 +0000 UTC m=+48.498356384" lastFinishedPulling="2024-12-13 01:28:11.047191076 +0000 UTC m=+54.616728244" observedRunningTime="2024-12-13 01:28:11.900436497 +0000 UTC m=+55.469973745" watchObservedRunningTime="2024-12-13 01:28:11.941624547 +0000 UTC m=+55.511161715" Dec 13 01:28:16.596366 containerd[1811]: time="2024-12-13T01:28:16.596284228Z" level=info msg="StopPodSandbox for \"d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2\"" Dec 13 01:28:16.678398 containerd[1811]: 2024-12-13 01:28:16.644 [WARNING][5684] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--7hccp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a1d94ea8-bc82-4759-bbe6-96e9a3a3933f", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-a2790899e3", ContainerID:"faa44f7872a184f7f5b37e8d6877bcc51c2a6936b51e0ec87343451c17bc2350", Pod:"coredns-76f75df574-7hccp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7914d6e9403", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:16.678398 containerd[1811]: 2024-12-13 01:28:16.644 [INFO][5684] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" Dec 13 01:28:16.678398 containerd[1811]: 2024-12-13 01:28:16.644 [INFO][5684] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" iface="eth0" netns="" Dec 13 01:28:16.678398 containerd[1811]: 2024-12-13 01:28:16.644 [INFO][5684] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" Dec 13 01:28:16.678398 containerd[1811]: 2024-12-13 01:28:16.644 [INFO][5684] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" Dec 13 01:28:16.678398 containerd[1811]: 2024-12-13 01:28:16.664 [INFO][5690] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" HandleID="k8s-pod-network.d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" Workload="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--7hccp-eth0" Dec 13 01:28:16.678398 containerd[1811]: 2024-12-13 01:28:16.665 [INFO][5690] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:16.678398 containerd[1811]: 2024-12-13 01:28:16.665 [INFO][5690] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:16.678398 containerd[1811]: 2024-12-13 01:28:16.674 [WARNING][5690] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" HandleID="k8s-pod-network.d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" Workload="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--7hccp-eth0" Dec 13 01:28:16.678398 containerd[1811]: 2024-12-13 01:28:16.674 [INFO][5690] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" HandleID="k8s-pod-network.d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" Workload="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--7hccp-eth0" Dec 13 01:28:16.678398 containerd[1811]: 2024-12-13 01:28:16.675 [INFO][5690] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:16.678398 containerd[1811]: 2024-12-13 01:28:16.676 [INFO][5684] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" Dec 13 01:28:16.679475 containerd[1811]: time="2024-12-13T01:28:16.678382051Z" level=info msg="TearDown network for sandbox \"d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2\" successfully" Dec 13 01:28:16.679475 containerd[1811]: time="2024-12-13T01:28:16.678845654Z" level=info msg="StopPodSandbox for \"d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2\" returns successfully" Dec 13 01:28:16.679475 containerd[1811]: time="2024-12-13T01:28:16.679375337Z" level=info msg="RemovePodSandbox for \"d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2\"" Dec 13 01:28:16.679475 containerd[1811]: time="2024-12-13T01:28:16.679404258Z" level=info msg="Forcibly stopping sandbox \"d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2\"" Dec 13 01:28:16.761656 containerd[1811]: 2024-12-13 01:28:16.720 [WARNING][5708] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--7hccp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a1d94ea8-bc82-4759-bbe6-96e9a3a3933f", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-a2790899e3", ContainerID:"faa44f7872a184f7f5b37e8d6877bcc51c2a6936b51e0ec87343451c17bc2350", Pod:"coredns-76f75df574-7hccp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7914d6e9403", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:16.761656 containerd[1811]: 2024-12-13 01:28:16.721 [INFO][5708] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" Dec 13 01:28:16.761656 containerd[1811]: 2024-12-13 01:28:16.721 [INFO][5708] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" iface="eth0" netns="" Dec 13 01:28:16.761656 containerd[1811]: 2024-12-13 01:28:16.721 [INFO][5708] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" Dec 13 01:28:16.761656 containerd[1811]: 2024-12-13 01:28:16.721 [INFO][5708] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" Dec 13 01:28:16.761656 containerd[1811]: 2024-12-13 01:28:16.745 [INFO][5714] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" HandleID="k8s-pod-network.d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" Workload="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--7hccp-eth0" Dec 13 01:28:16.761656 containerd[1811]: 2024-12-13 01:28:16.745 [INFO][5714] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:16.761656 containerd[1811]: 2024-12-13 01:28:16.745 [INFO][5714] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:16.761656 containerd[1811]: 2024-12-13 01:28:16.756 [WARNING][5714] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" HandleID="k8s-pod-network.d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" Workload="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--7hccp-eth0" Dec 13 01:28:16.761656 containerd[1811]: 2024-12-13 01:28:16.756 [INFO][5714] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" HandleID="k8s-pod-network.d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" Workload="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--7hccp-eth0" Dec 13 01:28:16.761656 containerd[1811]: 2024-12-13 01:28:16.758 [INFO][5714] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:16.761656 containerd[1811]: 2024-12-13 01:28:16.759 [INFO][5708] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2" Dec 13 01:28:16.761656 containerd[1811]: time="2024-12-13T01:28:16.761578721Z" level=info msg="TearDown network for sandbox \"d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2\" successfully" Dec 13 01:28:16.772095 containerd[1811]: time="2024-12-13T01:28:16.771917709Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 01:28:16.772095 containerd[1811]: time="2024-12-13T01:28:16.771987870Z" level=info msg="RemovePodSandbox \"d1236eb14042678ee890b48f5cab70f77b973ea777fa0a3e26de6a1f9fb4a5c2\" returns successfully" Dec 13 01:28:16.772610 containerd[1811]: time="2024-12-13T01:28:16.772567474Z" level=info msg="StopPodSandbox for \"c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e\"" Dec 13 01:28:16.874265 containerd[1811]: 2024-12-13 01:28:16.815 [WARNING][5733] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--mrdjt-eth0", GenerateName:"calico-apiserver-8659b5f7c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"c28ab2d8-49cf-45a5-b55c-a48ac6236be7", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8659b5f7c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-a2790899e3", ContainerID:"f6fbde65d4fa4a974094aecc3105c57764e608aaa97d44e46d24697b0765d720", Pod:"calico-apiserver-8659b5f7c7-mrdjt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali161aff0def4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:16.874265 containerd[1811]: 2024-12-13 01:28:16.817 [INFO][5733] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" Dec 13 01:28:16.874265 containerd[1811]: 2024-12-13 01:28:16.817 [INFO][5733] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" iface="eth0" netns="" Dec 13 01:28:16.874265 containerd[1811]: 2024-12-13 01:28:16.817 [INFO][5733] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" Dec 13 01:28:16.874265 containerd[1811]: 2024-12-13 01:28:16.817 [INFO][5733] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" Dec 13 01:28:16.874265 containerd[1811]: 2024-12-13 01:28:16.856 [INFO][5739] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" HandleID="k8s-pod-network.c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--mrdjt-eth0" Dec 13 01:28:16.874265 containerd[1811]: 2024-12-13 01:28:16.856 [INFO][5739] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:16.874265 containerd[1811]: 2024-12-13 01:28:16.856 [INFO][5739] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:16.874265 containerd[1811]: 2024-12-13 01:28:16.869 [WARNING][5739] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" HandleID="k8s-pod-network.c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--mrdjt-eth0" Dec 13 01:28:16.874265 containerd[1811]: 2024-12-13 01:28:16.869 [INFO][5739] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" HandleID="k8s-pod-network.c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--mrdjt-eth0" Dec 13 01:28:16.874265 containerd[1811]: 2024-12-13 01:28:16.871 [INFO][5739] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:16.874265 containerd[1811]: 2024-12-13 01:28:16.872 [INFO][5733] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" Dec 13 01:28:16.874265 containerd[1811]: time="2024-12-13T01:28:16.874216706Z" level=info msg="TearDown network for sandbox \"c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e\" successfully" Dec 13 01:28:16.874265 containerd[1811]: time="2024-12-13T01:28:16.874244026Z" level=info msg="StopPodSandbox for \"c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e\" returns successfully" Dec 13 01:28:16.876263 containerd[1811]: time="2024-12-13T01:28:16.875873477Z" level=info msg="RemovePodSandbox for \"c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e\"" Dec 13 01:28:16.876263 containerd[1811]: time="2024-12-13T01:28:16.875915077Z" level=info msg="Forcibly stopping sandbox \"c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e\"" Dec 13 01:28:16.952966 containerd[1811]: 2024-12-13 01:28:16.918 [WARNING][5758] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--mrdjt-eth0", GenerateName:"calico-apiserver-8659b5f7c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"c28ab2d8-49cf-45a5-b55c-a48ac6236be7", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8659b5f7c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-a2790899e3", ContainerID:"f6fbde65d4fa4a974094aecc3105c57764e608aaa97d44e46d24697b0765d720", Pod:"calico-apiserver-8659b5f7c7-mrdjt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali161aff0def4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:16.952966 containerd[1811]: 2024-12-13 01:28:16.918 [INFO][5758] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" Dec 13 01:28:16.952966 containerd[1811]: 2024-12-13 01:28:16.918 [INFO][5758] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" iface="eth0" netns="" Dec 13 01:28:16.952966 containerd[1811]: 2024-12-13 01:28:16.918 [INFO][5758] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" Dec 13 01:28:16.952966 containerd[1811]: 2024-12-13 01:28:16.918 [INFO][5758] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" Dec 13 01:28:16.952966 containerd[1811]: 2024-12-13 01:28:16.938 [INFO][5765] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" HandleID="k8s-pod-network.c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--mrdjt-eth0" Dec 13 01:28:16.952966 containerd[1811]: 2024-12-13 01:28:16.938 [INFO][5765] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:16.952966 containerd[1811]: 2024-12-13 01:28:16.938 [INFO][5765] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:16.952966 containerd[1811]: 2024-12-13 01:28:16.947 [WARNING][5765] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" HandleID="k8s-pod-network.c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--mrdjt-eth0" Dec 13 01:28:16.952966 containerd[1811]: 2024-12-13 01:28:16.947 [INFO][5765] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" HandleID="k8s-pod-network.c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--mrdjt-eth0" Dec 13 01:28:16.952966 containerd[1811]: 2024-12-13 01:28:16.949 [INFO][5765] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:16.952966 containerd[1811]: 2024-12-13 01:28:16.950 [INFO][5758] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e" Dec 13 01:28:16.953820 containerd[1811]: time="2024-12-13T01:28:16.953460590Z" level=info msg="TearDown network for sandbox \"c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e\" successfully" Dec 13 01:28:16.963688 containerd[1811]: time="2024-12-13T01:28:16.963504376Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 01:28:16.963688 containerd[1811]: time="2024-12-13T01:28:16.963585457Z" level=info msg="RemovePodSandbox \"c27a2aa3797a49e98e5551810ad3c42db19c64ec063b00abe0c705b796e9c55e\" returns successfully" Dec 13 01:28:16.964487 containerd[1811]: time="2024-12-13T01:28:16.964211461Z" level=info msg="StopPodSandbox for \"ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede\"" Dec 13 01:28:17.037902 containerd[1811]: 2024-12-13 01:28:17.004 [WARNING][5784] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--t7g7q-eth0", GenerateName:"calico-apiserver-8659b5f7c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"630b6a63-ec70-4f62-be70-0062014125b5", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8659b5f7c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-a2790899e3", ContainerID:"809aa90de02de60f809bcee2762bfd2e3947ea5d863284da874195894e51a98d", Pod:"calico-apiserver-8659b5f7c7-t7g7q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1097904c398", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:17.037902 containerd[1811]: 2024-12-13 01:28:17.005 [INFO][5784] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" Dec 13 01:28:17.037902 containerd[1811]: 2024-12-13 01:28:17.005 [INFO][5784] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" iface="eth0" netns="" Dec 13 01:28:17.037902 containerd[1811]: 2024-12-13 01:28:17.005 [INFO][5784] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" Dec 13 01:28:17.037902 containerd[1811]: 2024-12-13 01:28:17.005 [INFO][5784] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" Dec 13 01:28:17.037902 containerd[1811]: 2024-12-13 01:28:17.024 [INFO][5791] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" HandleID="k8s-pod-network.ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--t7g7q-eth0" Dec 13 01:28:17.037902 containerd[1811]: 2024-12-13 01:28:17.024 [INFO][5791] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:17.037902 containerd[1811]: 2024-12-13 01:28:17.025 [INFO][5791] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:17.037902 containerd[1811]: 2024-12-13 01:28:17.033 [WARNING][5791] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" HandleID="k8s-pod-network.ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--t7g7q-eth0" Dec 13 01:28:17.037902 containerd[1811]: 2024-12-13 01:28:17.033 [INFO][5791] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" HandleID="k8s-pod-network.ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--t7g7q-eth0" Dec 13 01:28:17.037902 containerd[1811]: 2024-12-13 01:28:17.034 [INFO][5791] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:17.037902 containerd[1811]: 2024-12-13 01:28:17.036 [INFO][5784] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" Dec 13 01:28:17.038643 containerd[1811]: time="2024-12-13T01:28:17.037992148Z" level=info msg="TearDown network for sandbox \"ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede\" successfully" Dec 13 01:28:17.038643 containerd[1811]: time="2024-12-13T01:28:17.038021389Z" level=info msg="StopPodSandbox for \"ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede\" returns successfully" Dec 13 01:28:17.039187 containerd[1811]: time="2024-12-13T01:28:17.039118876Z" level=info msg="RemovePodSandbox for \"ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede\"" Dec 13 01:28:17.039187 containerd[1811]: time="2024-12-13T01:28:17.039157276Z" level=info msg="Forcibly stopping sandbox \"ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede\"" Dec 13 01:28:17.117197 containerd[1811]: 2024-12-13 01:28:17.081 [WARNING][5809] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--t7g7q-eth0", GenerateName:"calico-apiserver-8659b5f7c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"630b6a63-ec70-4f62-be70-0062014125b5", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8659b5f7c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-a2790899e3", ContainerID:"809aa90de02de60f809bcee2762bfd2e3947ea5d863284da874195894e51a98d", Pod:"calico-apiserver-8659b5f7c7-t7g7q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1097904c398", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:17.117197 containerd[1811]: 2024-12-13 01:28:17.082 [INFO][5809] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" Dec 13 01:28:17.117197 containerd[1811]: 2024-12-13 01:28:17.082 [INFO][5809] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" iface="eth0" netns="" Dec 13 01:28:17.117197 containerd[1811]: 2024-12-13 01:28:17.082 [INFO][5809] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" Dec 13 01:28:17.117197 containerd[1811]: 2024-12-13 01:28:17.082 [INFO][5809] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" Dec 13 01:28:17.117197 containerd[1811]: 2024-12-13 01:28:17.102 [INFO][5815] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" HandleID="k8s-pod-network.ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--t7g7q-eth0" Dec 13 01:28:17.117197 containerd[1811]: 2024-12-13 01:28:17.102 [INFO][5815] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:17.117197 containerd[1811]: 2024-12-13 01:28:17.102 [INFO][5815] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:17.117197 containerd[1811]: 2024-12-13 01:28:17.112 [WARNING][5815] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" HandleID="k8s-pod-network.ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--t7g7q-eth0" Dec 13 01:28:17.117197 containerd[1811]: 2024-12-13 01:28:17.112 [INFO][5815] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" HandleID="k8s-pod-network.ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--apiserver--8659b5f7c7--t7g7q-eth0" Dec 13 01:28:17.117197 containerd[1811]: 2024-12-13 01:28:17.114 [INFO][5815] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:17.117197 containerd[1811]: 2024-12-13 01:28:17.115 [INFO][5809] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede" Dec 13 01:28:17.117659 containerd[1811]: time="2024-12-13T01:28:17.117281193Z" level=info msg="TearDown network for sandbox \"ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede\" successfully" Dec 13 01:28:17.124490 containerd[1811]: time="2024-12-13T01:28:17.124364279Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 01:28:17.124490 containerd[1811]: time="2024-12-13T01:28:17.124448600Z" level=info msg="RemovePodSandbox \"ff36528399291c8c0fd75688fcbf6a6ab9e4d4be343ecf7d46bec379d5474ede\" returns successfully" Dec 13 01:28:17.124912 containerd[1811]: time="2024-12-13T01:28:17.124878483Z" level=info msg="StopPodSandbox for \"b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806\"" Dec 13 01:28:17.194301 containerd[1811]: 2024-12-13 01:28:17.161 [WARNING][5834] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--a2790899e3-k8s-csi--node--driver--8xjpw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"31a49e50-08a6-4d91-bf9f-b4d8e0e1e065", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-a2790899e3", ContainerID:"2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825", Pod:"csi-node-driver-8xjpw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali08417ee6a5c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:17.194301 containerd[1811]: 2024-12-13 01:28:17.162 [INFO][5834] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" Dec 13 01:28:17.194301 containerd[1811]: 2024-12-13 01:28:17.162 [INFO][5834] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" iface="eth0" netns="" Dec 13 01:28:17.194301 containerd[1811]: 2024-12-13 01:28:17.162 [INFO][5834] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" Dec 13 01:28:17.194301 containerd[1811]: 2024-12-13 01:28:17.162 [INFO][5834] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" Dec 13 01:28:17.194301 containerd[1811]: 2024-12-13 01:28:17.181 [INFO][5840] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" HandleID="k8s-pod-network.b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" Workload="ci--4081.2.1--a--a2790899e3-k8s-csi--node--driver--8xjpw-eth0" Dec 13 01:28:17.194301 containerd[1811]: 2024-12-13 01:28:17.181 [INFO][5840] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:17.194301 containerd[1811]: 2024-12-13 01:28:17.182 [INFO][5840] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:17.194301 containerd[1811]: 2024-12-13 01:28:17.190 [WARNING][5840] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" HandleID="k8s-pod-network.b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" Workload="ci--4081.2.1--a--a2790899e3-k8s-csi--node--driver--8xjpw-eth0" Dec 13 01:28:17.194301 containerd[1811]: 2024-12-13 01:28:17.190 [INFO][5840] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" HandleID="k8s-pod-network.b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" Workload="ci--4081.2.1--a--a2790899e3-k8s-csi--node--driver--8xjpw-eth0" Dec 13 01:28:17.194301 containerd[1811]: 2024-12-13 01:28:17.191 [INFO][5840] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:17.194301 containerd[1811]: 2024-12-13 01:28:17.193 [INFO][5834] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" Dec 13 01:28:17.194691 containerd[1811]: time="2024-12-13T01:28:17.194359622Z" level=info msg="TearDown network for sandbox \"b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806\" successfully" Dec 13 01:28:17.194691 containerd[1811]: time="2024-12-13T01:28:17.194393942Z" level=info msg="StopPodSandbox for \"b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806\" returns successfully" Dec 13 01:28:17.195094 containerd[1811]: time="2024-12-13T01:28:17.195063947Z" level=info msg="RemovePodSandbox for \"b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806\"" Dec 13 01:28:17.195129 containerd[1811]: time="2024-12-13T01:28:17.195120027Z" level=info msg="Forcibly stopping sandbox \"b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806\"" Dec 13 01:28:17.263988 containerd[1811]: 2024-12-13 01:28:17.231 [WARNING][5858] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--a2790899e3-k8s-csi--node--driver--8xjpw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"31a49e50-08a6-4d91-bf9f-b4d8e0e1e065", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-a2790899e3", ContainerID:"2420f2fd95e3c2d567d405034838b78e4332347381e3ebe733f8fab598e17825", Pod:"csi-node-driver-8xjpw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali08417ee6a5c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:17.263988 containerd[1811]: 2024-12-13 01:28:17.231 [INFO][5858] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" Dec 13 01:28:17.263988 containerd[1811]: 2024-12-13 01:28:17.231 [INFO][5858] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" iface="eth0" netns="" Dec 13 01:28:17.263988 containerd[1811]: 2024-12-13 01:28:17.231 [INFO][5858] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" Dec 13 01:28:17.263988 containerd[1811]: 2024-12-13 01:28:17.231 [INFO][5858] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" Dec 13 01:28:17.263988 containerd[1811]: 2024-12-13 01:28:17.250 [INFO][5864] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" HandleID="k8s-pod-network.b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" Workload="ci--4081.2.1--a--a2790899e3-k8s-csi--node--driver--8xjpw-eth0" Dec 13 01:28:17.263988 containerd[1811]: 2024-12-13 01:28:17.250 [INFO][5864] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:17.263988 containerd[1811]: 2024-12-13 01:28:17.250 [INFO][5864] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:17.263988 containerd[1811]: 2024-12-13 01:28:17.258 [WARNING][5864] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" HandleID="k8s-pod-network.b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" Workload="ci--4081.2.1--a--a2790899e3-k8s-csi--node--driver--8xjpw-eth0" Dec 13 01:28:17.263988 containerd[1811]: 2024-12-13 01:28:17.258 [INFO][5864] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" HandleID="k8s-pod-network.b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" Workload="ci--4081.2.1--a--a2790899e3-k8s-csi--node--driver--8xjpw-eth0" Dec 13 01:28:17.263988 containerd[1811]: 2024-12-13 01:28:17.260 [INFO][5864] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:17.263988 containerd[1811]: 2024-12-13 01:28:17.262 [INFO][5858] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806" Dec 13 01:28:17.264395 containerd[1811]: time="2024-12-13T01:28:17.264039843Z" level=info msg="TearDown network for sandbox \"b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806\" successfully" Dec 13 01:28:17.279897 containerd[1811]: time="2024-12-13T01:28:17.279838267Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 01:28:17.279988 containerd[1811]: time="2024-12-13T01:28:17.279966468Z" level=info msg="RemovePodSandbox \"b341f141d5fd5be20330ae636a63a269b1fb69a554b6a3f4bfe12feb6af4e806\" returns successfully" Dec 13 01:28:17.280588 containerd[1811]: time="2024-12-13T01:28:17.280562032Z" level=info msg="StopPodSandbox for \"165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372\"" Dec 13 01:28:17.353157 containerd[1811]: 2024-12-13 01:28:17.319 [WARNING][5882] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--h7l4p-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"615b4d62-a251-4f67-a6ae-4331125f9266", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-a2790899e3", ContainerID:"931826f6934cc4937ad3ec70d28a0385ccafee6b640de96becfe900844eebef8", Pod:"coredns-76f75df574-h7l4p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidbf8d2ccca6", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:17.353157 containerd[1811]: 2024-12-13 01:28:17.319 [INFO][5882] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" Dec 13 01:28:17.353157 containerd[1811]: 2024-12-13 01:28:17.319 [INFO][5882] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" iface="eth0" netns="" Dec 13 01:28:17.353157 containerd[1811]: 2024-12-13 01:28:17.319 [INFO][5882] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" Dec 13 01:28:17.353157 containerd[1811]: 2024-12-13 01:28:17.319 [INFO][5882] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" Dec 13 01:28:17.353157 containerd[1811]: 2024-12-13 01:28:17.339 [INFO][5888] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" HandleID="k8s-pod-network.165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" Workload="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--h7l4p-eth0" Dec 13 01:28:17.353157 containerd[1811]: 2024-12-13 01:28:17.340 [INFO][5888] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 01:28:17.353157 containerd[1811]: 2024-12-13 01:28:17.340 [INFO][5888] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:17.353157 containerd[1811]: 2024-12-13 01:28:17.348 [WARNING][5888] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" HandleID="k8s-pod-network.165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" Workload="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--h7l4p-eth0" Dec 13 01:28:17.353157 containerd[1811]: 2024-12-13 01:28:17.348 [INFO][5888] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" HandleID="k8s-pod-network.165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" Workload="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--h7l4p-eth0" Dec 13 01:28:17.353157 containerd[1811]: 2024-12-13 01:28:17.349 [INFO][5888] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:17.353157 containerd[1811]: 2024-12-13 01:28:17.351 [INFO][5882] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" Dec 13 01:28:17.353592 containerd[1811]: time="2024-12-13T01:28:17.353238553Z" level=info msg="TearDown network for sandbox \"165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372\" successfully" Dec 13 01:28:17.353592 containerd[1811]: time="2024-12-13T01:28:17.353280353Z" level=info msg="StopPodSandbox for \"165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372\" returns successfully" Dec 13 01:28:17.354049 containerd[1811]: time="2024-12-13T01:28:17.354022438Z" level=info msg="RemovePodSandbox for \"165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372\"" Dec 13 01:28:17.354099 containerd[1811]: time="2024-12-13T01:28:17.354070478Z" level=info msg="Forcibly stopping sandbox \"165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372\"" Dec 13 01:28:17.428595 containerd[1811]: 2024-12-13 01:28:17.391 [WARNING][5906] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--h7l4p-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"615b4d62-a251-4f67-a6ae-4331125f9266", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-a2790899e3", ContainerID:"931826f6934cc4937ad3ec70d28a0385ccafee6b640de96becfe900844eebef8", Pod:"coredns-76f75df574-h7l4p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidbf8d2ccca6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:17.428595 containerd[1811]: 2024-12-13 01:28:17.391 [INFO][5906] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" Dec 13 01:28:17.428595 containerd[1811]: 2024-12-13 01:28:17.391 [INFO][5906] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" iface="eth0" netns="" Dec 13 01:28:17.428595 containerd[1811]: 2024-12-13 01:28:17.391 [INFO][5906] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" Dec 13 01:28:17.428595 containerd[1811]: 2024-12-13 01:28:17.391 [INFO][5906] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" Dec 13 01:28:17.428595 containerd[1811]: 2024-12-13 01:28:17.414 [INFO][5912] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" HandleID="k8s-pod-network.165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" Workload="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--h7l4p-eth0" Dec 13 01:28:17.428595 containerd[1811]: 2024-12-13 01:28:17.414 [INFO][5912] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:17.428595 containerd[1811]: 2024-12-13 01:28:17.414 [INFO][5912] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:17.428595 containerd[1811]: 2024-12-13 01:28:17.423 [WARNING][5912] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" HandleID="k8s-pod-network.165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" Workload="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--h7l4p-eth0" Dec 13 01:28:17.428595 containerd[1811]: 2024-12-13 01:28:17.423 [INFO][5912] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" HandleID="k8s-pod-network.165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" Workload="ci--4081.2.1--a--a2790899e3-k8s-coredns--76f75df574--h7l4p-eth0" Dec 13 01:28:17.428595 containerd[1811]: 2024-12-13 01:28:17.424 [INFO][5912] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:17.428595 containerd[1811]: 2024-12-13 01:28:17.426 [INFO][5906] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372" Dec 13 01:28:17.429494 containerd[1811]: time="2024-12-13T01:28:17.428575771Z" level=info msg="TearDown network for sandbox \"165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372\" successfully" Dec 13 01:28:17.436575 containerd[1811]: time="2024-12-13T01:28:17.436524423Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 01:28:17.436685 containerd[1811]: time="2024-12-13T01:28:17.436638984Z" level=info msg="RemovePodSandbox \"165b55e6a7a58a195b5814a22ddbbb477c844710b5b2d51b438e7992e1e69372\" returns successfully" Dec 13 01:28:17.437270 containerd[1811]: time="2024-12-13T01:28:17.437172188Z" level=info msg="StopPodSandbox for \"fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8\"" Dec 13 01:28:17.511974 containerd[1811]: 2024-12-13 01:28:17.477 [WARNING][5930] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--a2790899e3-k8s-calico--kube--controllers--64b868bd8--nmv69-eth0", GenerateName:"calico-kube-controllers-64b868bd8-", Namespace:"calico-system", SelfLink:"", UID:"56a8139e-d217-4073-b397-5d26c40f4540", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64b868bd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-a2790899e3", ContainerID:"c57bbb7ed962a5a08f8cfaf9d18c781fd09ba0e341e9632003caffc6a7e49b85", Pod:"calico-kube-controllers-64b868bd8-nmv69", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.69/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali247199f9299", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:17.511974 containerd[1811]: 2024-12-13 01:28:17.477 [INFO][5930] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" Dec 13 01:28:17.511974 containerd[1811]: 2024-12-13 01:28:17.477 [INFO][5930] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" iface="eth0" netns="" Dec 13 01:28:17.511974 containerd[1811]: 2024-12-13 01:28:17.477 [INFO][5930] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" Dec 13 01:28:17.511974 containerd[1811]: 2024-12-13 01:28:17.477 [INFO][5930] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" Dec 13 01:28:17.511974 containerd[1811]: 2024-12-13 01:28:17.497 [INFO][5937] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" HandleID="k8s-pod-network.fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--kube--controllers--64b868bd8--nmv69-eth0" Dec 13 01:28:17.511974 containerd[1811]: 2024-12-13 01:28:17.497 [INFO][5937] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:17.511974 containerd[1811]: 2024-12-13 01:28:17.497 [INFO][5937] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:17.511974 containerd[1811]: 2024-12-13 01:28:17.506 [WARNING][5937] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" HandleID="k8s-pod-network.fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--kube--controllers--64b868bd8--nmv69-eth0" Dec 13 01:28:17.511974 containerd[1811]: 2024-12-13 01:28:17.506 [INFO][5937] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" HandleID="k8s-pod-network.fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--kube--controllers--64b868bd8--nmv69-eth0" Dec 13 01:28:17.511974 containerd[1811]: 2024-12-13 01:28:17.507 [INFO][5937] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:17.511974 containerd[1811]: 2024-12-13 01:28:17.509 [INFO][5930] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" Dec 13 01:28:17.511974 containerd[1811]: time="2024-12-13T01:28:17.511854521Z" level=info msg="TearDown network for sandbox \"fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8\" successfully" Dec 13 01:28:17.511974 containerd[1811]: time="2024-12-13T01:28:17.511880242Z" level=info msg="StopPodSandbox for \"fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8\" returns successfully" Dec 13 01:28:17.513291 containerd[1811]: time="2024-12-13T01:28:17.512623086Z" level=info msg="RemovePodSandbox for \"fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8\"" Dec 13 01:28:17.513291 containerd[1811]: time="2024-12-13T01:28:17.512657087Z" level=info msg="Forcibly stopping sandbox \"fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8\"" Dec 13 01:28:17.584209 containerd[1811]: 2024-12-13 01:28:17.546 [WARNING][5955] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--a2790899e3-k8s-calico--kube--controllers--64b868bd8--nmv69-eth0", GenerateName:"calico-kube-controllers-64b868bd8-", Namespace:"calico-system", SelfLink:"", UID:"56a8139e-d217-4073-b397-5d26c40f4540", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 27, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64b868bd8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-a2790899e3", ContainerID:"c57bbb7ed962a5a08f8cfaf9d18c781fd09ba0e341e9632003caffc6a7e49b85", Pod:"calico-kube-controllers-64b868bd8-nmv69", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali247199f9299", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:28:17.584209 containerd[1811]: 2024-12-13 01:28:17.547 [INFO][5955] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" Dec 13 01:28:17.584209 containerd[1811]: 2024-12-13 01:28:17.547 [INFO][5955] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" iface="eth0" netns="" Dec 13 01:28:17.584209 containerd[1811]: 2024-12-13 01:28:17.547 [INFO][5955] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" Dec 13 01:28:17.584209 containerd[1811]: 2024-12-13 01:28:17.547 [INFO][5955] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" Dec 13 01:28:17.584209 containerd[1811]: 2024-12-13 01:28:17.568 [INFO][5961] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" HandleID="k8s-pod-network.fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--kube--controllers--64b868bd8--nmv69-eth0" Dec 13 01:28:17.584209 containerd[1811]: 2024-12-13 01:28:17.568 [INFO][5961] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:28:17.584209 containerd[1811]: 2024-12-13 01:28:17.568 [INFO][5961] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:28:17.584209 containerd[1811]: 2024-12-13 01:28:17.577 [WARNING][5961] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" HandleID="k8s-pod-network.fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--kube--controllers--64b868bd8--nmv69-eth0" Dec 13 01:28:17.584209 containerd[1811]: 2024-12-13 01:28:17.578 [INFO][5961] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" HandleID="k8s-pod-network.fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" Workload="ci--4081.2.1--a--a2790899e3-k8s-calico--kube--controllers--64b868bd8--nmv69-eth0" Dec 13 01:28:17.584209 containerd[1811]: 2024-12-13 01:28:17.580 [INFO][5961] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:28:17.584209 containerd[1811]: 2024-12-13 01:28:17.582 [INFO][5955] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8" Dec 13 01:28:17.584209 containerd[1811]: time="2024-12-13T01:28:17.583585316Z" level=info msg="TearDown network for sandbox \"fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8\" successfully" Dec 13 01:28:17.591193 containerd[1811]: time="2024-12-13T01:28:17.591062085Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 01:28:17.591409 containerd[1811]: time="2024-12-13T01:28:17.591276886Z" level=info msg="RemovePodSandbox \"fc5fef736f525172af5a285d617822b0d39d8c6a76efa09d2182ea831f81aaa8\" returns successfully"
Dec 13 01:28:24.791858 kubelet[3412]: I1213 01:28:24.790859 3412 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 01:28:28.127828 kubelet[3412]: I1213 01:28:28.127262 3412 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 01:29:12.924436 systemd[1]: Started sshd@7-10.200.20.34:22-10.200.16.10:42540.service - OpenSSH per-connection server daemon (10.200.16.10:42540).
Dec 13 01:29:13.363708 sshd[6101]: Accepted publickey for core from 10.200.16.10 port 42540 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:29:13.365914 sshd[6101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:29:13.370665 systemd-logind[1787]: New session 10 of user core.
Dec 13 01:29:13.377472 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 01:29:13.755338 sshd[6101]: pam_unix(sshd:session): session closed for user core
Dec 13 01:29:13.758416 systemd-logind[1787]: Session 10 logged out. Waiting for processes to exit.
Dec 13 01:29:13.761202 systemd[1]: sshd@7-10.200.20.34:22-10.200.16.10:42540.service: Deactivated successfully.
Dec 13 01:29:13.762938 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 01:29:13.765424 systemd-logind[1787]: Removed session 10.
Dec 13 01:29:18.835416 systemd[1]: Started sshd@8-10.200.20.34:22-10.200.16.10:43078.service - OpenSSH per-connection server daemon (10.200.16.10:43078).
Dec 13 01:29:19.278430 sshd[6137]: Accepted publickey for core from 10.200.16.10 port 43078 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:29:19.279775 sshd[6137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:29:19.285891 systemd-logind[1787]: New session 11 of user core.
Dec 13 01:29:19.287596 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 01:29:19.662821 sshd[6137]: pam_unix(sshd:session): session closed for user core
Dec 13 01:29:19.667067 systemd[1]: sshd@8-10.200.20.34:22-10.200.16.10:43078.service: Deactivated successfully.
Dec 13 01:29:19.667474 systemd-logind[1787]: Session 11 logged out. Waiting for processes to exit.
Dec 13 01:29:19.671027 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 01:29:19.672457 systemd-logind[1787]: Removed session 11.
Dec 13 01:29:24.741417 systemd[1]: Started sshd@9-10.200.20.34:22-10.200.16.10:43084.service - OpenSSH per-connection server daemon (10.200.16.10:43084).
Dec 13 01:29:25.167662 sshd[6157]: Accepted publickey for core from 10.200.16.10 port 43084 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:29:25.169435 sshd[6157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:29:25.173633 systemd-logind[1787]: New session 12 of user core.
Dec 13 01:29:25.181501 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 01:29:25.541710 sshd[6157]: pam_unix(sshd:session): session closed for user core
Dec 13 01:29:25.546517 systemd[1]: sshd@9-10.200.20.34:22-10.200.16.10:43084.service: Deactivated successfully.
Dec 13 01:29:25.549648 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 01:29:25.551023 systemd-logind[1787]: Session 12 logged out. Waiting for processes to exit.
Dec 13 01:29:25.552067 systemd-logind[1787]: Removed session 12.
Dec 13 01:29:25.617459 systemd[1]: Started sshd@10-10.200.20.34:22-10.200.16.10:43092.service - OpenSSH per-connection server daemon (10.200.16.10:43092).
Dec 13 01:29:26.050354 sshd[6172]: Accepted publickey for core from 10.200.16.10 port 43092 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:29:26.051718 sshd[6172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:29:26.056506 systemd-logind[1787]: New session 13 of user core.
Dec 13 01:29:26.060572 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 01:29:26.463910 sshd[6172]: pam_unix(sshd:session): session closed for user core
Dec 13 01:29:26.467145 systemd[1]: sshd@10-10.200.20.34:22-10.200.16.10:43092.service: Deactivated successfully.
Dec 13 01:29:26.472601 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 01:29:26.472639 systemd-logind[1787]: Session 13 logged out. Waiting for processes to exit.
Dec 13 01:29:26.476093 systemd-logind[1787]: Removed session 13.
Dec 13 01:29:26.542543 systemd[1]: Started sshd@11-10.200.20.34:22-10.200.16.10:43100.service - OpenSSH per-connection server daemon (10.200.16.10:43100).
Dec 13 01:29:26.986828 sshd[6185]: Accepted publickey for core from 10.200.16.10 port 43100 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:29:26.988148 sshd[6185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:29:26.992295 systemd-logind[1787]: New session 14 of user core.
Dec 13 01:29:26.997467 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 01:29:27.379392 sshd[6185]: pam_unix(sshd:session): session closed for user core
Dec 13 01:29:27.383782 systemd[1]: sshd@11-10.200.20.34:22-10.200.16.10:43100.service: Deactivated successfully.
Dec 13 01:29:27.387436 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 01:29:27.388361 systemd-logind[1787]: Session 14 logged out. Waiting for processes to exit.
Dec 13 01:29:27.389813 systemd-logind[1787]: Removed session 14.
Dec 13 01:29:32.456502 systemd[1]: Started sshd@12-10.200.20.34:22-10.200.16.10:45574.service - OpenSSH per-connection server daemon (10.200.16.10:45574).
Dec 13 01:29:32.880437 sshd[6227]: Accepted publickey for core from 10.200.16.10 port 45574 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:29:32.881981 sshd[6227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:29:32.886022 systemd-logind[1787]: New session 15 of user core.
Dec 13 01:29:32.894622 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 01:29:33.273739 sshd[6227]: pam_unix(sshd:session): session closed for user core
Dec 13 01:29:33.277047 systemd[1]: sshd@12-10.200.20.34:22-10.200.16.10:45574.service: Deactivated successfully.
Dec 13 01:29:33.280497 systemd-logind[1787]: Session 15 logged out. Waiting for processes to exit.
Dec 13 01:29:33.280885 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 01:29:33.282036 systemd-logind[1787]: Removed session 15.
Dec 13 01:29:38.348596 systemd[1]: Started sshd@13-10.200.20.34:22-10.200.16.10:45588.service - OpenSSH per-connection server daemon (10.200.16.10:45588).
Dec 13 01:29:38.775130 sshd[6258]: Accepted publickey for core from 10.200.16.10 port 45588 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:29:38.776531 sshd[6258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:29:38.780425 systemd-logind[1787]: New session 16 of user core.
Dec 13 01:29:38.786493 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 01:29:39.172195 sshd[6258]: pam_unix(sshd:session): session closed for user core
Dec 13 01:29:39.175143 systemd[1]: sshd@13-10.200.20.34:22-10.200.16.10:45588.service: Deactivated successfully.
Dec 13 01:29:39.179389 systemd-logind[1787]: Session 16 logged out. Waiting for processes to exit.
Dec 13 01:29:39.180069 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 01:29:39.181237 systemd-logind[1787]: Removed session 16.
Dec 13 01:29:44.245424 systemd[1]: Started sshd@14-10.200.20.34:22-10.200.16.10:48910.service - OpenSSH per-connection server daemon (10.200.16.10:48910).
Dec 13 01:29:44.671172 sshd[6291]: Accepted publickey for core from 10.200.16.10 port 48910 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:29:44.672781 sshd[6291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:29:44.677225 systemd-logind[1787]: New session 17 of user core.
Dec 13 01:29:44.682435 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 01:29:45.068545 sshd[6291]: pam_unix(sshd:session): session closed for user core
Dec 13 01:29:45.077632 systemd[1]: sshd@14-10.200.20.34:22-10.200.16.10:48910.service: Deactivated successfully.
Dec 13 01:29:45.081419 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:29:45.082131 systemd-logind[1787]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:29:45.083275 systemd-logind[1787]: Removed session 17.
Dec 13 01:29:45.142409 systemd[1]: Started sshd@15-10.200.20.34:22-10.200.16.10:48924.service - OpenSSH per-connection server daemon (10.200.16.10:48924).
Dec 13 01:29:45.575100 sshd[6305]: Accepted publickey for core from 10.200.16.10 port 48924 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA
Dec 13 01:29:45.576464 sshd[6305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:29:45.580964 systemd-logind[1787]: New session 18 of user core.
Dec 13 01:29:45.588410 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 01:29:46.101951 sshd[6305]: pam_unix(sshd:session): session closed for user core
Dec 13 01:29:46.106112 systemd-logind[1787]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:29:46.107312 systemd[1]: sshd@15-10.200.20.34:22-10.200.16.10:48924.service: Deactivated successfully.
Dec 13 01:29:46.111265 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:29:46.112243 systemd-logind[1787]: Removed session 18. Dec 13 01:29:46.182567 systemd[1]: Started sshd@16-10.200.20.34:22-10.200.16.10:48932.service - OpenSSH per-connection server daemon (10.200.16.10:48932). Dec 13 01:29:46.631684 sshd[6316]: Accepted publickey for core from 10.200.16.10 port 48932 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:29:46.633031 sshd[6316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:46.637256 systemd-logind[1787]: New session 19 of user core. Dec 13 01:29:46.641473 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:29:48.723875 sshd[6316]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:48.727287 systemd[1]: sshd@16-10.200.20.34:22-10.200.16.10:48932.service: Deactivated successfully. Dec 13 01:29:48.732424 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:29:48.734211 systemd-logind[1787]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:29:48.735655 systemd-logind[1787]: Removed session 19. Dec 13 01:29:48.797428 systemd[1]: Started sshd@17-10.200.20.34:22-10.200.16.10:54240.service - OpenSSH per-connection server daemon (10.200.16.10:54240). Dec 13 01:29:49.230272 sshd[6354]: Accepted publickey for core from 10.200.16.10 port 54240 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:29:49.231547 sshd[6354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:49.235995 systemd-logind[1787]: New session 20 of user core. Dec 13 01:29:49.245470 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 01:29:49.712635 sshd[6354]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:49.715595 systemd[1]: sshd@17-10.200.20.34:22-10.200.16.10:54240.service: Deactivated successfully. 
Dec 13 01:29:49.719334 systemd-logind[1787]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:29:49.719914 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:29:49.721525 systemd-logind[1787]: Removed session 20. Dec 13 01:29:49.792478 systemd[1]: Started sshd@18-10.200.20.34:22-10.200.16.10:54256.service - OpenSSH per-connection server daemon (10.200.16.10:54256). Dec 13 01:29:50.224223 sshd[6366]: Accepted publickey for core from 10.200.16.10 port 54256 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:29:50.225530 sshd[6366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:50.229857 systemd-logind[1787]: New session 21 of user core. Dec 13 01:29:50.236389 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 01:29:50.600388 sshd[6366]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:50.604416 systemd-logind[1787]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:29:50.605371 systemd[1]: sshd@18-10.200.20.34:22-10.200.16.10:54256.service: Deactivated successfully. Dec 13 01:29:50.608590 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:29:50.610413 systemd-logind[1787]: Removed session 21. Dec 13 01:29:55.676419 systemd[1]: Started sshd@19-10.200.20.34:22-10.200.16.10:54268.service - OpenSSH per-connection server daemon (10.200.16.10:54268). Dec 13 01:29:56.106883 sshd[6383]: Accepted publickey for core from 10.200.16.10 port 54268 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:29:56.108697 sshd[6383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:29:56.115247 systemd-logind[1787]: New session 22 of user core. Dec 13 01:29:56.120749 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 13 01:29:56.482816 sshd[6383]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:56.486546 systemd[1]: sshd@19-10.200.20.34:22-10.200.16.10:54268.service: Deactivated successfully. Dec 13 01:29:56.490255 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 01:29:56.490531 systemd-logind[1787]: Session 22 logged out. Waiting for processes to exit. Dec 13 01:29:56.493304 systemd-logind[1787]: Removed session 22. Dec 13 01:30:01.558434 systemd[1]: Started sshd@20-10.200.20.34:22-10.200.16.10:56384.service - OpenSSH per-connection server daemon (10.200.16.10:56384). Dec 13 01:30:01.989873 sshd[6418]: Accepted publickey for core from 10.200.16.10 port 56384 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:30:01.991202 sshd[6418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:01.995295 systemd-logind[1787]: New session 23 of user core. Dec 13 01:30:02.002477 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 01:30:02.365424 sshd[6418]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:02.368545 systemd[1]: sshd@20-10.200.20.34:22-10.200.16.10:56384.service: Deactivated successfully. Dec 13 01:30:02.372648 systemd-logind[1787]: Session 23 logged out. Waiting for processes to exit. Dec 13 01:30:02.373329 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 01:30:02.374963 systemd-logind[1787]: Removed session 23. Dec 13 01:30:07.443425 systemd[1]: Started sshd@21-10.200.20.34:22-10.200.16.10:56400.service - OpenSSH per-connection server daemon (10.200.16.10:56400). Dec 13 01:30:07.885581 sshd[6434]: Accepted publickey for core from 10.200.16.10 port 56400 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:30:07.886913 sshd[6434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:07.891106 systemd-logind[1787]: New session 24 of user core. 
Dec 13 01:30:07.901548 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 01:30:08.273531 sshd[6434]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:08.276821 systemd[1]: sshd@21-10.200.20.34:22-10.200.16.10:56400.service: Deactivated successfully. Dec 13 01:30:08.280833 systemd-logind[1787]: Session 24 logged out. Waiting for processes to exit. Dec 13 01:30:08.281390 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 01:30:08.282541 systemd-logind[1787]: Removed session 24. Dec 13 01:30:13.348436 systemd[1]: Started sshd@22-10.200.20.34:22-10.200.16.10:59284.service - OpenSSH per-connection server daemon (10.200.16.10:59284). Dec 13 01:30:13.782647 sshd[6448]: Accepted publickey for core from 10.200.16.10 port 59284 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:30:13.784117 sshd[6448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:13.788229 systemd-logind[1787]: New session 25 of user core. Dec 13 01:30:13.794500 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 01:30:14.158263 sshd[6448]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:14.161554 systemd[1]: sshd@22-10.200.20.34:22-10.200.16.10:59284.service: Deactivated successfully. Dec 13 01:30:14.166754 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 01:30:14.167606 systemd-logind[1787]: Session 25 logged out. Waiting for processes to exit. Dec 13 01:30:14.169280 systemd-logind[1787]: Removed session 25. Dec 13 01:30:19.232396 systemd[1]: Started sshd@23-10.200.20.34:22-10.200.16.10:51688.service - OpenSSH per-connection server daemon (10.200.16.10:51688). 
Dec 13 01:30:19.659734 sshd[6484]: Accepted publickey for core from 10.200.16.10 port 51688 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:30:19.661202 sshd[6484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:19.665751 systemd-logind[1787]: New session 26 of user core. Dec 13 01:30:19.671747 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 01:30:20.053377 sshd[6484]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:20.057191 systemd[1]: sshd@23-10.200.20.34:22-10.200.16.10:51688.service: Deactivated successfully. Dec 13 01:30:20.060298 systemd-logind[1787]: Session 26 logged out. Waiting for processes to exit. Dec 13 01:30:20.060793 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 01:30:20.063120 systemd-logind[1787]: Removed session 26. Dec 13 01:30:25.130404 systemd[1]: Started sshd@24-10.200.20.34:22-10.200.16.10:51696.service - OpenSSH per-connection server daemon (10.200.16.10:51696). Dec 13 01:30:25.555531 sshd[6499]: Accepted publickey for core from 10.200.16.10 port 51696 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:30:25.556879 sshd[6499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:25.562476 systemd-logind[1787]: New session 27 of user core. Dec 13 01:30:25.566202 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 13 01:30:25.931824 sshd[6499]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:25.934924 systemd[1]: sshd@24-10.200.20.34:22-10.200.16.10:51696.service: Deactivated successfully. Dec 13 01:30:25.938793 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 01:30:25.941356 systemd-logind[1787]: Session 27 logged out. Waiting for processes to exit. Dec 13 01:30:25.942437 systemd-logind[1787]: Removed session 27.